I0308 17:04:44.015069 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0308 17:04:44.015254 7 e2e.go:124] Starting e2e run "77395332-c807-49dd-b13a-65a54fd507de" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583687083 - Will randomize all specs
Will run 275 of 4993 specs

Mar 8 17:04:44.070: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 17:04:44.072: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 8 17:04:44.102: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 8 17:04:44.132: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 8 17:04:44.132: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 8 17:04:44.132: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 8 17:04:44.147: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 8 17:04:44.147: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 8 17:04:44.147: INFO: e2e test version: v1.19.0-alpha.0.709+672aa55ee4860a
Mar 8 17:04:44.149: INFO: kube-apiserver version: v1.17.0
Mar 8 17:04:44.149: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 17:04:44.153: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:04:44.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Mar 8 17:04:44.230: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-k2nf
STEP: Creating a pod to test atomic-volume-subpath
Mar 8 17:04:44.253: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-k2nf" in namespace "subpath-8447" to be "Succeeded or Failed"
Mar 8 17:04:44.258: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.655084ms
Mar 8 17:04:46.262: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 2.008612593s
Mar 8 17:04:48.266: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 4.012397261s
Mar 8 17:04:50.269: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 6.01596148s
Mar 8 17:04:52.273: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 8.01974844s
Mar 8 17:04:54.277: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 10.023983509s
Mar 8 17:04:56.281: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 12.027983541s
Mar 8 17:04:58.285: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 14.032103108s
Mar 8 17:05:00.289: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 16.035778992s
Mar 8 17:05:02.293: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 18.039209039s
Mar 8 17:05:04.297: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Running", Reason="", readiness=true. Elapsed: 20.043271481s
Mar 8 17:05:06.300: INFO: Pod "pod-subpath-test-downwardapi-k2nf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.046999402s
STEP: Saw pod success
Mar 8 17:05:06.300: INFO: Pod "pod-subpath-test-downwardapi-k2nf" satisfied condition "Succeeded or Failed"
Mar 8 17:05:06.302: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-k2nf container test-container-subpath-downwardapi-k2nf:
STEP: delete the pod
Mar 8 17:05:06.331: INFO: Waiting for pod pod-subpath-test-downwardapi-k2nf to disappear
Mar 8 17:05:06.335: INFO: Pod pod-subpath-test-downwardapi-k2nf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-k2nf
Mar 8 17:05:06.335: INFO: Deleting pod "pod-subpath-test-downwardapi-k2nf" in namespace "subpath-8447"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:05:06.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8447" for this suite.
• [SLOW TEST:22.196 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":1,"skipped":13,"failed":0}
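The spec above polls a pod that mounts one file of a downward API volume through a subPath until the pod succeeds. A minimal hand-written sketch of the same pattern (the pod name, image, and file name here are illustrative; the log does not show the manifest the framework actually generates):

    # Hypothetical reproduction: mount a single downwardAPI volume file via subPath.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "cat /mnt/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /mnt/podname
          subPath: podname        # mounts only the 'podname' file from the volume
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF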
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:05:06.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-1072
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1072 to expose endpoints map[]
Mar 8 17:05:06.453: INFO: Get endpoints failed (14.828009ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 8 17:05:07.456: INFO: successfully validated that service endpoint-test2 in namespace services-1072 exposes endpoints map[] (1.018487019s elapsed)
STEP: Creating pod pod1 in namespace services-1072
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1072 to expose endpoints map[pod1:[80]]
Mar 8 17:05:09.500: INFO: successfully validated that service endpoint-test2 in namespace services-1072 exposes endpoints map[pod1:[80]] (2.037421215s elapsed)
STEP: Creating pod pod2 in namespace services-1072
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1072 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 8 17:05:11.578: INFO: successfully validated that service endpoint-test2 in namespace services-1072 exposes endpoints map[pod1:[80] pod2:[80]] (2.07251371s elapsed)
STEP: Deleting pod pod1 in namespace services-1072
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1072 to expose endpoints map[pod2:[80]]
Mar 8 17:05:12.613: INFO: successfully validated that service endpoint-test2 in namespace services-1072 exposes endpoints map[pod2:[80]] (1.027802898s elapsed)
STEP: Deleting pod pod2 in namespace services-1072
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1072 to expose endpoints map[]
Mar 8 17:05:13.623: INFO: successfully validated that service endpoint-test2 in namespace services-1072 exposes endpoints map[] (1.005652926s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:05:13.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1072" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:7.333 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":2,"skipped":50,"failed":0}
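The endpoint bookkeeping validated above (the Endpoints object growing and shrinking as pod1 and pod2 come and go) can also be watched by hand with plain reads against the objects named in this run:

    # Observe the Endpoints object tracking ready pods behind the service.
    kubectl get endpoints endpoint-test2 --namespace=services-1072 -o yaml
    kubectl describe service endpoint-test2 --namespace=services-1072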
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:05:13.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-2480
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 8 17:05:13.717: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 8 17:05:13.776: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 8 17:05:15.780: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 8 17:05:17.781: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 8 17:05:19.780: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 8 17:05:21.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 8 17:05:23.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 8 17:05:25.781: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 8 17:05:25.786: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 8 17:05:27.790: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 8 17:05:29.790: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 8 17:05:31.861: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.103:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2480 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 17:05:31.861: INFO: >>> kubeConfig: /root/.kube/config
I0308 17:05:31.900121 7 log.go:172] (0xc002d36c60) (0xc001bbd180) Create stream
I0308 17:05:31.900151 7 log.go:172] (0xc002d36c60) (0xc001bbd180) Stream added, broadcasting: 1
I0308 17:05:31.905830 7 log.go:172] (0xc002d36c60) Reply frame received for 1
I0308 17:05:31.905906 7 log.go:172] (0xc002d36c60) (0xc001bbd2c0) Create stream
I0308 17:05:31.905933 7 log.go:172] (0xc002d36c60) (0xc001bbd2c0) Stream added, broadcasting: 3
I0308 17:05:31.908378 7 log.go:172] (0xc002d36c60) Reply frame received for 3
I0308 17:05:31.908423 7 log.go:172] (0xc002d36c60) (0xc001bbd360) Create stream
I0308 17:05:31.908438 7 log.go:172] (0xc002d36c60) (0xc001bbd360) Stream added, broadcasting: 5
I0308 17:05:31.909407 7 log.go:172] (0xc002d36c60) Reply frame received for 5
I0308 17:05:31.966916 7 log.go:172] (0xc002d36c60) Data frame received for 3
I0308 17:05:31.966959 7 log.go:172] (0xc001bbd2c0) (3) Data frame handling
I0308 17:05:31.966992 7 log.go:172] (0xc001bbd2c0) (3) Data frame sent
I0308 17:05:31.967179 7 log.go:172] (0xc002d36c60) Data frame received for 5
I0308 17:05:31.967218 7 log.go:172] (0xc001bbd360) (5) Data frame handling
I0308 17:05:31.967247 7 log.go:172] (0xc002d36c60) Data frame received for 3
I0308 17:05:31.967263 7 log.go:172] (0xc001bbd2c0) (3) Data frame handling
I0308 17:05:31.968846 7 log.go:172] (0xc002d36c60) Data frame received for 1
I0308 17:05:31.968867 7 log.go:172] (0xc001bbd180) (1) Data frame handling
I0308 17:05:31.968880 7 log.go:172] (0xc001bbd180) (1) Data frame sent
I0308 17:05:31.968916 7 log.go:172] (0xc002d36c60) (0xc001bbd180) Stream removed, broadcasting: 1
I0308 17:05:31.968944 7 log.go:172] (0xc002d36c60) Go away received
I0308 17:05:31.969238 7 log.go:172] (0xc002d36c60) (0xc001bbd180) Stream removed, broadcasting: 1
I0308 17:05:31.969258 7 log.go:172] (0xc002d36c60) (0xc001bbd2c0) Stream removed, broadcasting: 3
I0308 17:05:31.969264 7 log.go:172] (0xc002d36c60) (0xc001bbd360) Stream removed, broadcasting: 5
Mar 8 17:05:31.969: INFO: Found all expected endpoints: [netserver-0]
Mar 8 17:05:31.972: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.159:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2480 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 17:05:31.972: INFO: >>> kubeConfig: /root/.kube/config
I0308 17:05:32.004149 7 log.go:172] (0xc002cb6d10) (0xc002d29a40) Create stream
I0308 17:05:32.004172 7 log.go:172] (0xc002cb6d10) (0xc002d29a40) Stream added, broadcasting: 1
I0308 17:05:32.006626 7 log.go:172] (0xc002cb6d10) Reply frame received for 1
I0308 17:05:32.006655 7 log.go:172] (0xc002cb6d10) (0xc001bbd4a0) Create stream
I0308 17:05:32.006666 7 log.go:172] (0xc002cb6d10) (0xc001bbd4a0) Stream added, broadcasting: 3
I0308 17:05:32.007585 7 log.go:172] (0xc002cb6d10) Reply frame received for 3
I0308 17:05:32.007627 7 log.go:172] (0xc002cb6d10) (0xc002d29ae0) Create stream
I0308 17:05:32.007644 7 log.go:172] (0xc002cb6d10) (0xc002d29ae0) Stream added, broadcasting: 5
I0308 17:05:32.008593 7 log.go:172] (0xc002cb6d10) Reply frame received for 5
I0308 17:05:32.071690 7 log.go:172] (0xc002cb6d10) Data frame received for 3
I0308 17:05:32.071719 7 log.go:172] (0xc001bbd4a0) (3) Data frame handling
I0308 17:05:32.071738 7 log.go:172] (0xc001bbd4a0) (3) Data frame sent
I0308 17:05:32.071751 7 log.go:172] (0xc002cb6d10) Data frame received for 3
I0308 17:05:32.071762 7 log.go:172] (0xc001bbd4a0) (3) Data frame handling
I0308 17:05:32.071868 7 log.go:172] (0xc002cb6d10) Data frame received for 5
I0308 17:05:32.071903 7 log.go:172] (0xc002d29ae0) (5) Data frame handling
I0308 17:05:32.073510 7 log.go:172] (0xc002cb6d10) Data frame received for 1
I0308 17:05:32.073527 7 log.go:172] (0xc002d29a40) (1) Data frame handling
I0308 17:05:32.073545 7 log.go:172] (0xc002d29a40) (1) Data frame sent
I0308 17:05:32.073561 7 log.go:172] (0xc002cb6d10) (0xc002d29a40) Stream removed, broadcasting: 1
I0308 17:05:32.073576 7 log.go:172] (0xc002cb6d10) Go away received
I0308 17:05:32.073727 7 log.go:172] (0xc002cb6d10) (0xc002d29a40) Stream removed, broadcasting: 1
I0308 17:05:32.073746 7 log.go:172] (0xc002cb6d10) (0xc001bbd4a0) Stream removed, broadcasting: 3
I0308 17:05:32.073763 7 log.go:172] (0xc002cb6d10) (0xc002d29ae0) Stream removed, broadcasting: 5
Mar 8 17:05:32.073: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:05:32.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2480" for this suite.
• [SLOW TEST:18.397 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":58,"failed":0}
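The two probes above are ordinary curls issued from inside host-test-container-pod against each netserver pod's /hostName endpoint. Run by hand, the equivalent would be (the pod IP is taken from this run and only valid while it exists):

    kubectl exec host-test-container-pod --namespace=pod-network-test-2480 -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.103:8080/hostName"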
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:05:32.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 17:05:32.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 8 17:05:34.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719283932, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719283932, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719283933, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719283932, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 17:05:37.995: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:05:38.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-86" for this suite.
STEP: Destroying namespace "webhook-86-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.058 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":4,"skipped":63,"failed":0}
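The discovery walk above only reads API metadata, so it can be reproduced with raw requests against the same documents (jq is assumed here purely for readability):

    kubectl get --raw /apis | jq '.groups[] | select(.name == "admissionregistration.k8s.io")'
    kubectl get --raw /apis/admissionregistration.k8s.io
    kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq '.resources[].name'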
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:05:38.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:01.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5871" for this suite.
• [SLOW TEST:23.420 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":88,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:01.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:05.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2946" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:05.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1395
STEP: creating an pod
Mar 8 17:06:05.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5471 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 8 17:06:08.184: INFO: stderr: ""
Mar 8 17:06:08.185: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Mar 8 17:06:08.185: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 8 17:06:08.185: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5471" to be "running and ready, or succeeded"
Mar 8 17:06:08.205: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.568024ms
Mar 8 17:06:10.209: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.024444369s
Mar 8 17:06:10.209: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 8 17:06:10.209: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Mar 8 17:06:10.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5471'
Mar 8 17:06:10.326: INFO: stderr: ""
Mar 8 17:06:10.326: INFO: stdout: "I0308 17:06:09.316645 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/h2ct 201\nI0308 17:06:09.516807 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/g64g 410\nI0308 17:06:09.716815 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/szn 359\nI0308 17:06:09.916774 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/2xmp 526\nI0308 17:06:10.116790 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/5hg 533\nI0308 17:06:10.316814 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/z86 267\n"
STEP: limiting log lines
Mar 8 17:06:10.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5471 --tail=1'
Mar 8 17:06:10.423: INFO: stderr: ""
Mar 8 17:06:10.423: INFO: stdout: "I0308 17:06:10.316814 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/z86 267\n"
Mar 8 17:06:10.423: INFO: got output "I0308 17:06:10.316814 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/z86 267\n"
STEP: limiting log bytes
Mar 8 17:06:10.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5471 --limit-bytes=1'
Mar 8 17:06:10.515: INFO: stderr: ""
Mar 8 17:06:10.515: INFO: stdout: "I"
Mar 8 17:06:10.515: INFO: got output "I"
STEP: exposing timestamps
Mar 8 17:06:10.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5471 --tail=1 --timestamps'
Mar 8 17:06:10.592: INFO: stderr: ""
Mar 8 17:06:10.592: INFO: stdout: "2020-03-08T17:06:10.516837709Z I0308 17:06:10.516744 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/h82n 419\n"
Mar 8 17:06:10.592: INFO: got output "2020-03-08T17:06:10.516837709Z I0308 17:06:10.516744 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/h82n 419\n"
STEP: restricting to a time range
Mar 8 17:06:13.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5471 --since=1s'
Mar 8 17:06:13.204: INFO: stderr: ""
Mar 8 17:06:13.204: INFO: stdout: "I0308 17:06:12.316852 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/wnw 586\nI0308 17:06:12.516784 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/5ldg 585\nI0308 17:06:12.716773 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/c5ks 468\nI0308 17:06:12.916793 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/2z4x 422\nI0308 17:06:13.116775 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/7z6 299\n"
Mar 8 17:06:13.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5471 --since=24h'
Mar 8 17:06:13.282: INFO: stderr: ""
Mar 8 17:06:13.282: INFO: stdout: "I0308 17:06:09.316645 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/h2ct 201\nI0308 17:06:09.516807 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/g64g 410\nI0308 17:06:09.716815 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/szn 359\nI0308 17:06:09.916774 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/2xmp 526\nI0308 17:06:10.116790 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/5hg 533\nI0308 17:06:10.316814 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/z86 267\nI0308 17:06:10.516744 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/h82n 419\nI0308 17:06:10.716750 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/6gv4 298\nI0308 17:06:10.916787 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/m9ng 213\nI0308 17:06:11.116868 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/6s6 295\nI0308 17:06:11.316800 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/4tb 201\nI0308 17:06:11.516849 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/498k 486\nI0308 17:06:11.716784 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/d2kq 365\nI0308 17:06:11.916822 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/6bl9 535\nI0308 17:06:12.116812 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/bhgb 462\nI0308 17:06:12.316852 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/wnw 586\nI0308 17:06:12.516784 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/5ldg 585\nI0308 17:06:12.716773 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/c5ks 468\nI0308 17:06:12.916793 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/2z4x 422\nI0308 17:06:13.116775 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/7z6 299\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401
Mar 8 17:06:13.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5471'
Mar 8 17:06:15.200: INFO: stderr: ""
Mar 8 17:06:15.200: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:15.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5471" for this suite.
• [SLOW TEST:9.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":7,"skipped":135,"failed":0}
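Stripped of the test harness, the filtering flags exercised above are plain kubectl logs invocations (pod and container are both named logs-generator in this run):

    kubectl logs logs-generator logs-generator --namespace=kubectl-5471                # full log
    kubectl logs logs-generator logs-generator --namespace=kubectl-5471 --tail=1       # last line only
    kubectl logs logs-generator logs-generator --namespace=kubectl-5471 --limit-bytes=1
    kubectl logs logs-generator logs-generator --namespace=kubectl-5471 --tail=1 --timestamps
    kubectl logs logs-generator logs-generator --namespace=kubectl-5471 --since=1s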
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:15.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 17:06:15.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 17:06:18.913: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 8 17:06:18.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:20.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9224" for this suite.
STEP: Destroying namespace "webhook-9224-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":8,"skipped":159,"failed":0}
SS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:20.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Mar 8 17:06:20.720: INFO: created pod pod-service-account-defaultsa
Mar 8 17:06:20.720: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Mar 8 17:06:20.727: INFO: created pod pod-service-account-mountsa
Mar 8 17:06:20.727: INFO: pod pod-service-account-mountsa service account token volume mount: true
Mar 8 17:06:20.733: INFO: created pod pod-service-account-nomountsa
Mar 8 17:06:20.733: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Mar 8 17:06:20.763: INFO: created pod pod-service-account-defaultsa-mountspec
Mar 8 17:06:20.763: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Mar 8 17:06:20.784: INFO: created pod pod-service-account-mountsa-mountspec
Mar 8 17:06:20.784: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Mar 8 17:06:20.815: INFO: created pod pod-service-account-nomountsa-mountspec
Mar 8 17:06:20.815: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Mar 8 17:06:20.839: INFO: created pod pod-service-account-defaultsa-nomountspec
Mar 8 17:06:20.839: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Mar 8 17:06:20.847: INFO: created pod pod-service-account-mountsa-nomountspec
Mar 8 17:06:20.847: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Mar 8 17:06:20.900: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 8 17:06:20.900: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2519" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":9,"skipped":161,"failed":0}
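The nine pods above cross three ServiceAccount settings with three pod-spec settings; as the mount results show, the pod-spec field wins when set, and the ServiceAccount default applies otherwise. A sketch of the two knobs (names here are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa
    automountServiceAccountToken: false
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: automount-demo
    spec:
      serviceAccountName: nomount-sa
      # The pod-level setting overrides the ServiceAccount default, so the
      # token volume is mounted despite the SA opting out.
      automountServiceAccountToken: true
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
    EOF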
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:20.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 8 17:06:21.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be" in namespace "downward-api-5092" to be "Succeeded or Failed"
Mar 8 17:06:21.095: INFO: Pod "downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be": Phase="Pending", Reason="", readiness=false. Elapsed: 55.530109ms
Mar 8 17:06:23.113: INFO: Pod "downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074013776s
Mar 8 17:06:25.117: INFO: Pod "downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077947113s
STEP: Saw pod success
Mar 8 17:06:25.117: INFO: Pod "downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be" satisfied condition "Succeeded or Failed"
Mar 8 17:06:25.120: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be container client-container:
STEP: delete the pod
Mar 8 17:06:25.192: INFO: Waiting for pod downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be to disappear
Mar 8 17:06:25.197: INFO: Pod downwardapi-volume-47b33563-a2ee-4567-a854-ed4c110696be no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:25.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5092" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":161,"failed":0}
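The volume read back above exposes the container's memory limit to the container itself through a resourceFieldRef. A minimal sketch of such a pod (the names and the 64Mi limit are illustrative, not the values the framework used):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-limit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: "64Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF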
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:25.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 8 17:06:25.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253" in namespace "projected-9030" to be "Succeeded or Failed"
Mar 8 17:06:25.295: INFO: Pod "downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253": Phase="Pending", Reason="", readiness=false. Elapsed: 30.026561ms
Mar 8 17:06:27.298: INFO: Pod "downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033000275s
STEP: Saw pod success
Mar 8 17:06:27.298: INFO: Pod "downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253" satisfied condition "Succeeded or Failed"
Mar 8 17:06:27.300: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253 container client-container:
STEP: delete the pod
Mar 8 17:06:27.315: INFO: Waiting for pod downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253 to disappear
Mar 8 17:06:27.335: INFO: Pod downwardapi-volume-cd96f3e5-8749-460b-ba47-54989f7a6253 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:27.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9030" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":161,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:27.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:33.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5418" for this suite.
• [SLOW TEST:6.077 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":12,"skipped":200,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:33.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-ea695137-1b82-4407-9d71-ce1bd28976a5
STEP: Creating a pod to test consume configMaps
Mar 8 17:06:33.480: INFO: Waiting up to 5m0s for pod "pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6" in namespace "configmap-8386" to be "Succeeded or Failed"
Mar 8 17:06:33.517: INFO: Pod "pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.417135ms
Mar 8 17:06:35.520: INFO: Pod "pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039894438s
STEP: Saw pod success
Mar 8 17:06:35.520: INFO: Pod "pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6" satisfied condition "Succeeded or Failed"
Mar 8 17:06:35.523: INFO: Trying to get logs from node latest-worker pod pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6 container configmap-volume-test:
STEP: delete the pod
Mar 8 17:06:35.555: INFO: Waiting for pod pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6 to disappear
Mar 8 17:06:35.563: INFO: Pod pod-configmaps-05bdea8c-7fd9-4b9c-8fa5-19dce50a78d6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:35.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8386" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:35.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 8 17:06:38.203: INFO: Successfully updated pod "annotationupdate5d0844ed-6997-4c98-8881-f6dbffec7f68"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:40.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6152" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":231,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:40.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-ce66f4f8-d0d1-4a7e-a525-36b551edb0d7
STEP: Creating a pod to test consume configMaps
Mar 8 17:06:40.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-88adc064-be34-412c-a0e4-37144993006b" in namespace "configmap-181" to be "Succeeded or Failed"
Mar 8 17:06:40.357: INFO: Pod "pod-configmaps-88adc064-be34-412c-a0e4-37144993006b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.62964ms
Mar 8 17:06:42.373: INFO: Pod "pod-configmaps-88adc064-be34-412c-a0e4-37144993006b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018818835s
STEP: Saw pod success
Mar 8 17:06:42.373: INFO: Pod "pod-configmaps-88adc064-be34-412c-a0e4-37144993006b" satisfied condition "Succeeded or Failed"
Mar 8 17:06:42.376: INFO: Trying to get logs from node latest-worker pod pod-configmaps-88adc064-be34-412c-a0e4-37144993006b container configmap-volume-test:
STEP: delete the pod
Mar 8 17:06:42.395: INFO: Waiting for pod pod-configmaps-88adc064-be34-412c-a0e4-37144993006b to disappear
Mar 8 17:06:42.399: INFO: Pod pod-configmaps-88adc064-be34-412c-a0e4-37144993006b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:42.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-181" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":250,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:42.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 17:06:42.870: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 17:06:45.910: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:56.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2818" for this suite.
STEP: Destroying namespace "webhook-2818-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.767 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":16,"skipped":253,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 8 17:06:56.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 8 17:06:56.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config version'
Mar 8 17:06:56.359: INFO: stderr: ""
Mar 8 17:06:56.359: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.709+672aa55ee4860a\", GitCommit:\"672aa55ee4860a8ae497c0523bbaf4ab9ac169a0\", GitTreeState:\"clean\", BuildDate:\"2020-03-08T16:42:48Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 8 17:06:56.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6228" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":17,"skipped":288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:06:56.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0308 17:07:06.498617 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 17:07:06.498: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:07:06.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3274" for this suite. 
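The garbage-collector spec above turns on deletion propagation: because the RC is deleted without orphaning, the GC follows ownerReferences and removes the pods, which is what "wait for all pods to be garbage collected" observes. A sketch of the client call involved, using current client-go signatures (releases contemporary with this run pass *metav1.DeleteOptions and no context); swapping in DeletePropagationOrphan would instead leave the pods running:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCAndDependents removes a replication controller and asks the API
// server to have the garbage collector delete its pods in the background.
func deleteRCAndDependents(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
}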
• [SLOW TEST:10.139 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":18,"skipped":314,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:07:06.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-990ed070-7b82-49ea-a8c9-c814bc9c30f9 STEP: Creating a pod to test consume secrets Mar 8 17:07:06.596: INFO: Waiting up to 5m0s for pod "pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f" in namespace "secrets-9735" to be "Succeeded or Failed" Mar 8 17:07:06.612: INFO: Pod "pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028405ms Mar 8 17:07:08.615: INFO: Pod "pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019471751s STEP: Saw pod success Mar 8 17:07:08.615: INFO: Pod "pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f" satisfied condition "Succeeded or Failed" Mar 8 17:07:08.617: INFO: Trying to get logs from node latest-worker pod pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f container secret-volume-test: STEP: delete the pod Mar 8 17:07:08.631: INFO: Waiting for pod pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f to disappear Mar 8 17:07:08.650: INFO: Pod pod-secrets-615c3a4a-d252-4941-91f6-c3679bd9ad8f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:07:08.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9735" for this suite. 
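For reference, the pod behind the secret-volume spec above follows a fixed pattern: mount the secret as a volume, have a short-lived container print a key back, then assert on the log. A sketch using the secret name's role from the log; the image, arguments, and paths are illustrative of the pattern, not lifted from the test source:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod mounts a secret as files and reads one entry back; the pod
// runs to completion, matching the "Succeeded or Failed" wait above.
func secretVolumePod(ns, secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Args:  []string{"mounttest", "--file_content=/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
}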
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":318,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:07:08.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:07:08.691: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:07:09.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7783" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":20,"skipped":334,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:07:09.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
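(The Container Lifecycle Hook spec opened just above continues below.) The custom-resource defaulting spec earlier in this block leaves almost no trace in the log, so a sketch of what it exercises is worth spelling out: a v1 CRD whose structural schema carries a default, which the API server applies both to incoming requests and to objects read back from storage. The field name and default value here are hypothetical:

package example

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// defaultedSchema returns an OpenAPI v3 structural schema in which the
// (hypothetical) field "foo" defaults to "bar" when omitted.
func defaultedSchema() *apiextensionsv1.JSONSchemaProps {
	return &apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"foo": {
				Type:    "string",
				Default: &apiextensionsv1.JSON{Raw: []byte(`"bar"`)},
			},
		},
	}
}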
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 17:07:16.026: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 17:07:16.045: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 17:07:18.045: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 17:07:18.049: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 17:07:20.045: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 17:07:20.062: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 17:07:22.045: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 17:07:22.049: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 17:07:24.045: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 17:07:24.051: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:07:24.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9280" for this suite. • [SLOW TEST:14.158 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":349,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:07:24.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:07:24.151: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
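(The DaemonSet rollout that starts above resumes below.) In the preStop spec just completed, the pod lingers through several polling rounds because deletion first triggers its lifecycle hook: the kubelet issues an HTTP GET against the handler pod created in BeforeEach, and only then tears the container down. A sketch of the relevant block; host, port, and path are placeholders, and note that current client-go names the handler type LifecycleHandler (used here), while releases of this vintage spell it corev1.Handler:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTP builds the lifecycle block for a pod whose deletion should
// first fire an HTTP GET at a handler pod, as verified by "check prestop
// hook" above.
func preStopHTTP(handlerIP string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop", // placeholder path
				Host: handlerIP,
				Port: intstr.FromInt(8080), // placeholder port
			},
		},
	}
}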
Mar 8 17:07:24.157: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:24.163: INFO: Number of nodes with available pods: 0 Mar 8 17:07:24.163: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:07:25.171: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:25.174: INFO: Number of nodes with available pods: 0 Mar 8 17:07:25.174: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:07:26.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:26.170: INFO: Number of nodes with available pods: 0 Mar 8 17:07:26.170: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:07:27.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:27.170: INFO: Number of nodes with available pods: 2 Mar 8 17:07:27.170: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 8 17:07:27.230: INFO: Wrong image for pod: daemon-set-hf987. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:27.230: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:27.256: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:28.261: INFO: Wrong image for pod: daemon-set-hf987. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:28.261: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:28.265: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:29.259: INFO: Wrong image for pod: daemon-set-hf987. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:29.259: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:29.262: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:30.261: INFO: Wrong image for pod: daemon-set-hf987. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:30.261: INFO: Pod daemon-set-hf987 is not available Mar 8 17:07:30.261: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 8 17:07:30.265: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:31.260: INFO: Wrong image for pod: daemon-set-hf987. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:31.260: INFO: Pod daemon-set-hf987 is not available Mar 8 17:07:31.260: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:31.264: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:32.261: INFO: Wrong image for pod: daemon-set-hf987. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:32.261: INFO: Pod daemon-set-hf987 is not available Mar 8 17:07:32.261: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:32.265: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:33.260: INFO: Pod daemon-set-p926x is not available Mar 8 17:07:33.260: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:33.263: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:34.261: INFO: Pod daemon-set-p926x is not available Mar 8 17:07:34.261: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:34.265: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:35.260: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:35.264: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:36.260: INFO: Wrong image for pod: daemon-set-xhkmb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 17:07:36.260: INFO: Pod daemon-set-xhkmb is not available Mar 8 17:07:36.264: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:37.260: INFO: Pod daemon-set-mndl7 is not available Mar 8 17:07:37.264: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 8 17:07:37.267: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:37.270: INFO: Number of nodes with available pods: 1 Mar 8 17:07:37.270: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 17:07:38.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:38.276: INFO: Number of nodes with available pods: 1 Mar 8 17:07:38.276: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 17:07:39.276: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:07:39.279: INFO: Number of nodes with available pods: 2 Mar 8 17:07:39.279: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6214, will wait for the garbage collector to delete the pods Mar 8 17:07:39.352: INFO: Deleting DaemonSet.extensions daemon-set took: 5.649083ms Mar 8 17:07:39.653: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.177906ms Mar 8 17:07:52.556: INFO: Number of nodes with available pods: 0 Mar 8 17:07:52.556: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 17:07:52.558: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6214/daemonsets","resourceVersion":"44273"},"items":null} Mar 8 17:07:52.560: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6214/pods","resourceVersion":"44273"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:07:52.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6214" for this suite. 
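The one-pod-at-a-time image flip above (daemon-set-hf987 replaced first, then daemon-set-xhkmb) is RollingUpdate behavior with the default maxUnavailable of 1. Made explicit against the apps/v1 API:

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateStrategy caps the rollout at one unavailable daemon pod, so an
// image change is applied node by node as logged above.
func rollingUpdateStrategy() appsv1.DaemonSetUpdateStrategy {
	maxUnavailable := intstr.FromInt(1)
	return appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{
			MaxUnavailable: &maxUnavailable,
		},
	}
}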
• [SLOW TEST:28.510 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":22,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:07:52.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 8 17:07:52.607: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 17:07:52.632: INFO: Waiting for terminating namespaces to be deleted... Mar 8 17:07:52.656: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 8 17:07:52.661: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 17:07:52.661: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:07:52.661: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 17:07:52.661: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:07:52.661: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 8 17:07:52.678: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 17:07:52.678: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:07:52.678: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 8 17:07:52.678: INFO: Container coredns ready: true, restart count 0 Mar 8 17:07:52.678: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 17:07:52.678: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa632f0c5710c1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:07:53.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4175" for this suite. 
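The FailedScheduling event above is produced by a pod whose nodeSelector no node can satisfy. A minimal reproduction, reusing the pause image seen elsewhere in this run; the label key and value stand in for the test's generated ones:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod carries a nodeSelector that matches no node, so the
// scheduler reports "0/3 nodes are available: 3 node(s) didn't match node
// selector" and the pod stays Pending.
func unschedulablePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"nonexistent-label": "value"}, // assumed key/value
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
}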
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":23,"skipped":428,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:07:53.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 8 17:07:53.783: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 17:07:53.792: INFO: Waiting for terminating namespaces to be deleted... Mar 8 17:07:53.794: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 8 17:07:53.799: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 17:07:53.800: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:07:53.800: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 17:07:53.800: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:07:53.800: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 8 17:07:53.805: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 17:07:53.805: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:07:53.805: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 8 17:07:53.805: INFO: Container coredns ready: true, restart count 0 Mar 8 17:07:53.805: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 17:07:53.805: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 8 17:07:59.925: INFO: Pod coredns-6955765f44-cgshp requesting resource cpu=100m on Node latest-worker2 Mar 8 17:07:59.925: INFO: Pod kindnet-2j5xm requesting resource cpu=100m on Node latest-worker Mar 8 17:07:59.925: INFO: Pod kindnet-spz5f requesting resource cpu=100m on Node latest-worker2 Mar 8 17:07:59.925: INFO: Pod kube-proxy-9jc24 requesting resource cpu=0m on Node latest-worker Mar 8 17:07:59.925: INFO: Pod kube-proxy-cx5xz requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
Mar 8 17:07:59.925: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 8 17:07:59.931: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9.15fa6330c1b98b04], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1117/filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9.15fa6330ed7a0fd0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9.15fa633176afe0e4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9.15fa6331883f2048], Reason = [Created], Message = [Created container filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9.15fa6331924c1db9], Reason = [Started], Message = [Started container filler-pod-1e3b11c7-ff4a-494a-bb32-66380ad62ad9] STEP: Considering event: Type = [Normal], Name = [filler-pod-354a2366-9014-494b-9098-adce97d93a45.15fa6330bc30207b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1117/filler-pod-354a2366-9014-494b-9098-adce97d93a45 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-354a2366-9014-494b-9098-adce97d93a45.15fa6330e700804e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-354a2366-9014-494b-9098-adce97d93a45.15fa6330f77f634a], Reason = [Created], Message = [Created container filler-pod-354a2366-9014-494b-9098-adce97d93a45] STEP: Considering event: Type = [Normal], Name = [filler-pod-354a2366-9014-494b-9098-adce97d93a45.15fa6331051cfbd0], Reason = [Started], Message = [Started container filler-pod-354a2366-9014-494b-9098-adce97d93a45] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa6332280a51f4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:07.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1117" for this suite. 
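The arithmetic above is the whole trick: per node, allocatable CPU minus the running pods' requests (11130m computed for latest-worker, 11060m for latest-worker2) is claimed by a pause "filler" pod, so the one additional pod fails with "2 Insufficient cpu". The fillers are steered to a node via the temporary label the test attaches; a sketch of one filler, assuming (per the STEP lines) that the label key is node:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod requests a fixed amount of CPU on one labeled node, leaving too
// little headroom for any further pod with a CPU request.
func fillerPod(ns, nodeValue string, milliCPU int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "filler-pod-", Namespace: ns},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"node": nodeValue}, // label applied by the test
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: *resource.NewMilliQuantity(milliCPU, resource.DecimalSI),
					},
				},
			}},
		},
	}
}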
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:13.424 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":24,"skipped":429,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:07.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-a965dd76-7454-46f8-bf53-53d3a0fa33fd STEP: Creating secret with name s-test-opt-upd-2bdfe467-f813-4d46-a5d0-47480fe4d515 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a965dd76-7454-46f8-bf53-53d3a0fa33fd STEP: Updating secret s-test-opt-upd-2bdfe467-f813-4d46-a5d0-47480fe4d515 STEP: Creating secret with name s-test-opt-create-ea11a32c-b783-4bd5-a27a-2f1683af62e7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:15.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9810" for this suite. 
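Three secret references are in play above, and the late-created s-test-opt-create-… one only works because the volume is marked optional: the pod starts while that secret is still missing, and the kubelet projects the data into the running pod once it appears (likewise the pod keeps running after s-test-opt-del-… is deleted). The fragment that matters:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalSecretVolume tolerates a missing secret: the pod is admitted and
// started anyway, and the volume contents show up once the secret exists.
func optionalSecretVolume(volName, secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}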
• [SLOW TEST:8.240 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:15.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5596" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":481,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:19.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:08:19.521: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 8 17:08:19.530: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 8 17:08:24.535: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 17:08:24.535: INFO: Creating deployment "test-rolling-update-deployment" Mar 8 17:08:24.541: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 8 17:08:24.561: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 8 17:08:26.578: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 8 17:08:26.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284104, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284104, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284106, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284104, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 17:08:28.605: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 17:08:28.614: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4821 /apis/apps/v1/namespaces/deployment-4821/deployments/test-rolling-update-deployment 2b83a88e-8b5d-4ab5-83ea-adf63489a867 44588 1 2020-03-08 17:08:24 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000744df8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 17:08:24 +0000 UTC,LastTransitionTime:2020-03-08 17:08:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-08 17:08:26 +0000 UTC,LastTransitionTime:2020-03-08 17:08:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 17:08:28.617: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-4821 /apis/apps/v1/namespaces/deployment-4821/replicasets/test-rolling-update-deployment-664dd8fc7f a6fc8cd9-8860-4bd6-b475-73fbcdf1aaf2 44575 1 2020-03-08 17:08:24 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2b83a88e-8b5d-4ab5-83ea-adf63489a867 0xc0008264b7 0xc0008264b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000826558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:08:28.617: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 8 17:08:28.617: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4821 /apis/apps/v1/namespaces/deployment-4821/replicasets/test-rolling-update-controller eea41833-0cea-459d-92e3-c627cf117eb5 44587 2 2020-03-08 17:08:19 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2b83a88e-8b5d-4ab5-83ea-adf63489a867 0xc000826317 0xc000826318}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000826448 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:08:28.620: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-jm894" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-jm894 test-rolling-update-deployment-664dd8fc7f- deployment-4821 
/api/v1/namespaces/deployment-4821/pods/test-rolling-update-deployment-664dd8fc7f-jm894 5e555c42-5eaf-48d4-9a3c-2148790e90f8 44574 0 2020-03-08 17:08:24 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f a6fc8cd9-8860-4bd6-b475-73fbcdf1aaf2 0xc00082bee7 0xc00082bee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tbm2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tbm2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tbm2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:08:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-08 17:08:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.138,StartTime:2020-03-08 17:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:08:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://8b48c4b3e9d4f311e4f4387bd93703652ac02e7bbbee72688e7e67139aa881f4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:28.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4821" for this suite. • [SLOW TEST:9.188 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":27,"skipped":489,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:28.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 17:08:28.769: INFO: Waiting up to 5m0s for pod "pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4" in namespace "emptydir-2389" to be "Succeeded or Failed" Mar 8 17:08:28.811: INFO: Pod "pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.890486ms Mar 8 17:08:30.814: INFO: Pod "pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044817745s STEP: Saw pod success Mar 8 17:08:30.814: INFO: Pod "pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4" satisfied condition "Succeeded or Failed" Mar 8 17:08:30.816: INFO: Trying to get logs from node latest-worker pod pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4 container test-container: STEP: delete the pod Mar 8 17:08:31.167: INFO: Waiting for pod pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4 to disappear Mar 8 17:08:31.171: INFO: Pod pod-d1a51ba9-8bf5-4932-88e1-8bbc1d0b4ce4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:31.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2389" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":494,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:31.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 17:08:34.341: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:34.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9282" for this suite.
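The termination-message spec above comes down to one container field. With TerminationMessagePolicy set to FallbackToLogsOnError, a container that fails without writing its termination-log file gets the tail of its log (the literal DONE here) copied into status, which is what the Expected/got comparison checks. A container spec reproducing that; the image and command are illustrative:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// failingContainer exits nonzero after logging DONE; with the fallback
// policy, DONE becomes the pod's termination message.
func failingContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "docker.io/library/busybox:1.29", // illustrative image
		Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}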
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":514,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:34.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 8 17:08:34.489: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3522 /api/v1/namespaces/watch-3522/configmaps/e2e-watch-test-label-changed e7a1c873-0c97-4868-b00f-c4ef332f9e8b 44674 0 2020-03-08 17:08:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:08:34.489: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3522 /api/v1/namespaces/watch-3522/configmaps/e2e-watch-test-label-changed e7a1c873-0c97-4868-b00f-c4ef332f9e8b 44675 0 2020-03-08 17:08:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:08:34.489: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3522 /api/v1/namespaces/watch-3522/configmaps/e2e-watch-test-label-changed e7a1c873-0c97-4868-b00f-c4ef332f9e8b 44676 0 2020-03-08 17:08:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 8 17:08:44.524: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3522 /api/v1/namespaces/watch-3522/configmaps/e2e-watch-test-label-changed e7a1c873-0c97-4868-b00f-c4ef332f9e8b 44724 0 2020-03-08 17:08:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:08:44.524: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3522 /api/v1/namespaces/watch-3522/configmaps/e2e-watch-test-label-changed e7a1c873-0c97-4868-b00f-c4ef332f9e8b 
44725 0 2020-03-08 17:08:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:08:44.524: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3522 /api/v1/namespaces/watch-3522/configmaps/e2e-watch-test-label-changed e7a1c873-0c97-4868-b00f-c4ef332f9e8b 44726 0 2020-03-08 17:08:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:08:44.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3522" for this suite. • [SLOW TEST:10.153 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":30,"skipped":525,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:08:44.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 8 17:08:44.576: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 8 17:08:55.741: INFO: >>> kubeConfig: /root/.kube/config Mar 8 17:08:57.529: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:08.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9718" for this suite. 
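Both halves of the CRD spec above reduce to the same check: every served version of a group must surface in the aggregated OpenAPI document (the repeated kubeConfig lines are the test driving kubectl against it). The one-CRD-two-versions case looks roughly like this; the version names and the empty schema are placeholders:

package example

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// twoServedVersions declares one CRD serving v1 and v2; both must be
// published to OpenAPI even though only v1 is the storage version.
func twoServedVersions() []apiextensionsv1.CustomResourceDefinitionVersion {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	return []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v1", Served: true, Storage: true, Schema: schema},
		{Name: "v2", Served: true, Storage: false, Schema: schema},
	}
}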
• [SLOW TEST:24.260 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":31,"skipped":535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:08.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:10.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5183" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":32,"skipped":559,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:10.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 8 17:09:11.084: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 8 17:09:11.092: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 8 17:09:11.092: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 8 17:09:11.104: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 8 17:09:11.104: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 8 17:09:11.222: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 8 17:09:11.222: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 8 17:09:18.320: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:18.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-47" for this suite. 
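------------------------------
Reference sketch (editorial note, not output from this run): a LimitRange carrying the container defaults this spec verifies. The quantities mirror the values logged above (209715200 bytes is 200Mi, 214748364800 bytes is 200Gi); the object name is illustrative.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // defaultsLimitRange returns a LimitRange whose per-container defaults
    // are injected into pods created without resource requirements.
    func defaultsLimitRange() *corev1.LimitRange {
        return &corev1.LimitRange{
            ObjectMeta: metav1.ObjectMeta{Name: "container-defaults"},
            Spec: corev1.LimitRangeSpec{
                Limits: []corev1.LimitRangeItem{{
                    Type: corev1.LimitTypeContainer,
                    // Applied to spec.containers[].resources.requests.
                    DefaultRequest: corev1.ResourceList{
                        corev1.ResourceCPU:              resource.MustParse("100m"),
                        corev1.ResourceMemory:           resource.MustParse("200Mi"),
                        corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
                    },
                    // Applied to spec.containers[].resources.limits.
                    Default: corev1.ResourceList{
                        corev1.ResourceCPU:              resource.MustParse("500m"),
                        corev1.ResourceMemory:           resource.MustParse("500Mi"),
                        corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
                    },
                }},
            },
        }
    }
------------------------------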
• [SLOW TEST:7.420 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":33,"skipped":577,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:18.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-05f208c2-0436-439e-a958-5796aa345a22 STEP: Creating a pod to test consume secrets Mar 8 17:09:18.503: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05" in namespace "projected-5828" to be "Succeeded or Failed" Mar 8 17:09:18.507: INFO: Pod "pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.639674ms Mar 8 17:09:20.511: INFO: Pod "pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007651784s Mar 8 17:09:22.515: INFO: Pod "pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011555642s STEP: Saw pod success Mar 8 17:09:22.515: INFO: Pod "pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05" satisfied condition "Succeeded or Failed" Mar 8 17:09:22.518: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05 container projected-secret-volume-test: STEP: delete the pod Mar 8 17:09:22.548: INFO: Waiting for pod pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05 to disappear Mar 8 17:09:22.555: INFO: Pod pod-projected-secrets-713a150d-3131-4e68-b028-1fdeb5e6cf05 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:22.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5828" for this suite. 
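------------------------------
Reference sketch (editorial note, not output from this run): a pod consuming a Secret through a projected volume, the mechanism this spec mounts and reads back. Pod name, image, and paths are illustrative.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedSecretPod mounts the named Secret via a projected volume
    // and reads one of its keys back from the filesystem.
    func projectedSecretPod(secretName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "reader",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/projected/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "projected-secret", MountPath: "/etc/projected", ReadOnly: true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }
------------------------------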
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:22.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:09:22.650: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:24.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1641" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:24.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:09:24.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8583' Mar 8 17:09:25.278: INFO: stderr: "" Mar 8 17:09:25.278: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 8 17:09:25.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8583' Mar 8 17:09:25.561: INFO: stderr: "" Mar 8 17:09:25.562: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 17:09:26.567: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:09:26.567: INFO: Found 0 / 1 Mar 8 17:09:27.565: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:09:27.565: INFO: Found 1 / 1 Mar 8 17:09:27.565: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 8 17:09:27.569: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:09:27.569: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 17:09:27.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe pod agnhost-master-x2r4t --namespace=kubectl-8583' Mar 8 17:09:27.697: INFO: stderr: "" Mar 8 17:09:27.697: INFO: stdout: "Name: agnhost-master-x2r4t\nNamespace: kubectl-8583\nPriority: 0\nNode: latest-worker2/172.17.0.18\nStart Time: Sun, 08 Mar 2020 17:09:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.173\nIPs:\n IP: 10.244.2.173\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://3647c6ca5b2040d4ee6c0eb14d923ace61a1d07ecaa2f05082d8da53694c593e\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 08 Mar 2020 17:09:26 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4mffd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4mffd:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4mffd\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-8583/agnhost-master-x2r4t to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" Mar 8 17:09:27.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8583' Mar 8 17:09:27.814: INFO: stderr: "" Mar 8 17:09:27.814: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8583\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-x2r4t\n" Mar 8 17:09:27.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8583' Mar 8 17:09:27.901: INFO: stderr: "" Mar 8 17:09:27.901: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8583\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.82.193\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 
10.244.2.173:6379\nSession Affinity: None\nEvents: <none>\n" Mar 8 17:09:27.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 8 17:09:28.009: INFO: stderr: "" Mar 8 17:09:28.009: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:49:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Sun, 08 Mar 2020 17:09:21 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 08 Mar 2020 17:05:21 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 08 Mar 2020 17:05:21 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 08 Mar 2020 17:05:21 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 08 Mar 2020 17:05:21 +0000 Sun, 08 Mar 2020 14:50:16 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.17\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb03af8223ea4430b6faaad8b31da5e5\n System UUID: 220fc748-c3b9-4de4-aa76-4a3520169f00\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-gxrvh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 139m\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 140m\n kube-system kindnet-gp8bt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 139m\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 140m\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 140m\n kube-system kube-proxy-nxxmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 139m\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 140m\n local-path-storage local-path-provisioner-7745554f7f-52xw4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 139m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n" Mar 8 
17:09:28.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe namespace kubectl-8583' Mar 8 17:09:28.092: INFO: stderr: "" Mar 8 17:09:28.092: INFO: stdout: "Name: kubectl-8583\nLabels: e2e-framework=kubectl\n e2e-run=77395332-c807-49dd-b13a-65a54fd507de\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:28.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8583" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":36,"skipped":652,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:28.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:09:28.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:09:30.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284168, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284168, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284168, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284168, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:09:33.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:09:33.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2336-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that 
should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:34.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2538" for this suite. STEP: Destroying namespace "webhook-2538-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.885 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":37,"skipped":655,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:34.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:42.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2293" for this suite. • [SLOW TEST:7.106 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":38,"skipped":663,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:42.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 8 17:09:42.136: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:45.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6215" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":39,"skipped":676,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:45.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1878.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1878.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:09:49.836: INFO: DNS probes using dns-1878/dns-test-5fab748f-7dd7-4ca2-82b5-d8d569463e62 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:49.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1878" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":40,"skipped":687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:49.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:09:50.006: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9a7c687d-7d0d-4eb1-a1b0-075d60ddd0e3" in namespace "security-context-test-3038" to be "Succeeded or Failed" Mar 8 17:09:50.010: INFO: Pod "busybox-privileged-false-9a7c687d-7d0d-4eb1-a1b0-075d60ddd0e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.566062ms Mar 8 17:09:52.013: INFO: Pod "busybox-privileged-false-9a7c687d-7d0d-4eb1-a1b0-075d60ddd0e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007059068s Mar 8 17:09:52.013: INFO: Pod "busybox-privileged-false-9a7c687d-7d0d-4eb1-a1b0-075d60ddd0e3" satisfied condition "Succeeded or Failed" Mar 8 17:09:52.020: INFO: Got logs for pod "busybox-privileged-false-9a7c687d-7d0d-4eb1-a1b0-075d60ddd0e3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:52.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3038" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":712,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:52.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:09:52.908: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:09:54.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284192, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284192, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284193, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284192, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:09:57.970: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:09:57.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2060-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is 
storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:09:59.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4748" for this suite. STEP: Destroying namespace "webhook-4748-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.392 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":42,"skipped":722,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:09:59.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 8 17:09:59.522: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 17:09:59.531: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 17:09:59.533: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 8 17:09:59.538: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 17:09:59.538: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:09:59.538: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 17:09:59.538: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:09:59.538: INFO: sample-webhook-deployment-6cc9cc9dc-tfjzs from webhook-4748 started at 2020-03-08 17:09:52 +0000 UTC (1 container status recorded) Mar 8 17:09:59.538: INFO: Container sample-webhook ready: true, restart count 0 Mar 8 17:09:59.538: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 8 17:09:59.541: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 17:09:59.541: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:09:59.541: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 17:09:59.541: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:09:59.541: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 8 17:09:59.541: INFO: Container coredns ready: true, restart count 0 Mar 8 17:09:59.541: INFO: pod-exec-websocket-cebadf49-0e4d-431d-9b53-dea51fac3de0 from pods-1641 started at 2020-03-08 17:09:22 +0000 UTC (1 container status recorded) Mar 8 17:09:59.541: INFO: Container main ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7d1d8f82-0523-4d43-b183-bae69fae3e06 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7d1d8f82-0523-4d43-b183-bae69fae3e06 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-7d1d8f82-0523-4d43-b183-bae69fae3e06 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:10:03.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6436" for this suite. 
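------------------------------
Reference sketch (editorial note, not output from this run): the matching half of this predicate test is a pod whose nodeSelector names the label just applied to the node. The label key/value are parameters; the pause image is an illustrative choice.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nodeSelectorPod is schedulable only onto a node carrying the given
    // label, which is the NodeSelector predicate this spec validates.
    func nodeSelectorPod(labelKey, labelValue string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{labelKey: labelValue},
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.2",
                }},
            },
        }
    }
------------------------------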
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":43,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:10:03.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:10:03.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7" in namespace "downward-api-6797" to be "Succeeded or Failed" Mar 8 17:10:03.735: INFO: Pod "downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326064ms Mar 8 17:10:05.738: INFO: Pod "downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014137286s Mar 8 17:10:07.742: INFO: Pod "downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01764182s STEP: Saw pod success Mar 8 17:10:07.742: INFO: Pod "downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7" satisfied condition "Succeeded or Failed" Mar 8 17:10:07.745: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7 container client-container: STEP: delete the pod Mar 8 17:10:07.781: INFO: Waiting for pod downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7 to disappear Mar 8 17:10:07.784: INFO: Pod downwardapi-volume-15d94c82-3362-40f7-a87c-6ac64d2abfa7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:10:07.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6797" for this suite. 
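------------------------------
Reference sketch (editorial note, not output from this run): a downward API volume with DefaultMode set, the field this spec checks on the files it creates; 0400 is an illustrative restrictive mode.

    package example

    import corev1 "k8s.io/api/core/v1"

    // downwardAPIVolume exposes pod metadata as files; DefaultMode sets
    // the permission bits the kubelet applies to each created file.
    func downwardAPIVolume() corev1.Volume {
        mode := int32(0400)
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    DefaultMode: &mode,
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
    }
------------------------------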
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":787,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:10:07.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:10:07.857: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 17:10:10.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 create -f -' Mar 8 17:10:13.462: INFO: stderr: "" Mar 8 17:10:13.462: INFO: stdout: "e2e-test-crd-publish-openapi-2593-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 17:10:13.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 delete e2e-test-crd-publish-openapi-2593-crds test-cr' Mar 8 17:10:13.561: INFO: stderr: "" Mar 8 17:10:13.561: INFO: stdout: "e2e-test-crd-publish-openapi-2593-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 8 17:10:13.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 apply -f -' Mar 8 17:10:13.836: INFO: stderr: "" Mar 8 17:10:13.836: INFO: stdout: "e2e-test-crd-publish-openapi-2593-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 17:10:13.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 delete e2e-test-crd-publish-openapi-2593-crds test-cr' Mar 8 17:10:13.930: INFO: stderr: "" Mar 8 17:10:13.930: INFO: stdout: "e2e-test-crd-publish-openapi-2593-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 8 17:10:13.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2593-crds' Mar 8 17:10:14.148: INFO: stderr: "" Mar 8 17:10:14.148: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2593-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:10:16.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9836" for this suite. 
• [SLOW TEST:9.138 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":45,"skipped":792,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:10:16.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 8 17:10:17.052: INFO: Waiting up to 5m0s for pod "var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9" in namespace "var-expansion-9054" to be "Succeeded or Failed" Mar 8 17:10:17.060: INFO: Pod "var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.452165ms Mar 8 17:10:19.063: INFO: Pod "var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01167783s Mar 8 17:10:21.067: INFO: Pod "var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01539776s STEP: Saw pod success Mar 8 17:10:21.067: INFO: Pod "var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9" satisfied condition "Succeeded or Failed" Mar 8 17:10:21.070: INFO: Trying to get logs from node latest-worker pod var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9 container dapi-container: STEP: delete the pod Mar 8 17:10:21.122: INFO: Waiting for pod var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9 to disappear Mar 8 17:10:21.137: INFO: Pod var-expansion-5d9a3330-b3ee-4344-80e2-66493cefb7a9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:10:21.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9054" for this suite. 
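------------------------------
Reference sketch (editorial note, not output from this run): $(VAR) expansion in a container command, the mechanism this spec exercises; the kubelet substitutes $(MESSAGE) from the container's own env before starting the process. Image and values are illustrative.

    package example

    import corev1 "k8s.io/api/core/v1"

    // expansionContainer echoes an env var expanded by the kubelet, not
    // by a shell: $(MESSAGE) is resolved before /bin/echo runs.
    func expansionContainer() corev1.Container {
        return corev1.Container{
            Name:    "dapi-container",
            Image:   "busybox",
            Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
            Command: []string{"/bin/echo"},
            Args:    []string{"$(MESSAGE)"},
        }
    }
------------------------------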
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:10:21.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:10:21.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6117" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":47,"skipped":836,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:10:21.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4063.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4063.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4063.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4063.svc.cluster.local SRV)" && test -n "$$check" 
&& echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4063.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 18.47.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.47.18_udp@PTR;check="$$(dig +tcp +noall +answer +search 18.47.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.47.18_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4063.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4063.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4063.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4063.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4063.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 18.47.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.47.18_udp@PTR;check="$$(dig +tcp +noall +answer +search 18.47.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.47.18_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:10:25.577: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:25.580: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:25.602: INFO: Unable to read jessie_udp@dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:25.608: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:25.611: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:25.627: INFO: Lookups using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_udp@dns-test-service.dns-4063.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local] Mar 8 17:10:30.636: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:30.638: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:30.656: INFO: Unable to read jessie_udp@dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:30.660: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:30.662: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:30.674: INFO: Lookups using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc failed for: 
[wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_udp@dns-test-service.dns-4063.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local] Mar 8 17:10:35.638: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:35.641: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:35.666: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:35.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:35.684: INFO: Lookups using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local] Mar 8 17:10:40.637: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:40.640: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:40.661: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:40.671: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:40.692: INFO: Lookups using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local] Mar 8 17:10:45.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods 
dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:45.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:45.669: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:45.672: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:45.689: INFO: Lookups using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local] Mar 8 17:10:50.637: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:50.639: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:50.665: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:50.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local from pod dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc: the server could not find the requested resource (get pods dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc) Mar 8 17:10:50.723: INFO: Lookups using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4063.svc.cluster.local] Mar 8 17:10:55.692: INFO: DNS probes using dns-4063/dns-test-8754fa14-5ffa-4a7c-9e49-389099e76efc succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:10:56.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4063" for this suite. 
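A note on the probe loop above: the doubled `$$` is pod-spec escaping (in a container command string Kubernetes collapses `$$` to a literal `$`), and the intermittent "the server could not find the requested resource" lines are the framework polling each /results file through the apiserver proxy before the corresponding probe has written it; the run converges to "DNS probes ... succeeded" once every file exists. Unescaped, one iteration of the wheezy/jessie loop is roughly the following, with the same service and namespace names as above:

# resolve the service A record over UDP and TCP, and the SRV record for its
# named port; drop an OK marker per lookup that returns a non-empty answer
check="$(dig +notcp +noall +answer +search dns-test-service.dns-4063.svc.cluster.local A)" \
  && test -n "$check" && echo OK > /results/udp@dns-test-service
check="$(dig +tcp +noall +answer +search dns-test-service.dns-4063.svc.cluster.local A)" \
  && test -n "$check" && echo OK > /results/tcp@dns-test-service
check="$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4063.svc.cluster.local SRV)" \
  && test -n "$check" && echo OK > /results/udp@_http._tcp.dns-test-service
sleep 1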
• [SLOW TEST:34.747 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":48,"skipped":843,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:10:56.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-g9qz STEP: Creating a pod to test atomic-volume-subpath Mar 8 17:10:56.181: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-g9qz" in namespace "subpath-6854" to be "Succeeded or Failed" Mar 8 17:10:56.210: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Pending", Reason="", readiness=false. Elapsed: 28.796235ms Mar 8 17:10:58.214: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 2.032679202s Mar 8 17:11:00.217: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 4.036331513s Mar 8 17:11:02.222: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 6.040439076s Mar 8 17:11:04.225: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 8.044071256s Mar 8 17:11:06.228: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 10.047041671s Mar 8 17:11:08.232: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 12.050408723s Mar 8 17:11:10.277: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 14.095654791s Mar 8 17:11:12.280: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 16.099117561s Mar 8 17:11:14.283: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 18.101749981s Mar 8 17:11:16.286: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Running", Reason="", readiness=true. Elapsed: 20.104523699s Mar 8 17:11:18.289: INFO: Pod "pod-subpath-test-projected-g9qz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.10742989s STEP: Saw pod success Mar 8 17:11:18.289: INFO: Pod "pod-subpath-test-projected-g9qz" satisfied condition "Succeeded or Failed" Mar 8 17:11:18.291: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-g9qz container test-container-subpath-projected-g9qz: STEP: delete the pod Mar 8 17:11:18.341: INFO: Waiting for pod pod-subpath-test-projected-g9qz to disappear Mar 8 17:11:18.342: INFO: Pod pod-subpath-test-projected-g9qz no longer exists STEP: Deleting pod pod-subpath-test-projected-g9qz Mar 8 17:11:18.342: INFO: Deleting pod "pod-subpath-test-projected-g9qz" in namespace "subpath-6854" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:11:18.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6854" for this suite. • [SLOW TEST:22.327 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":49,"skipped":853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:11:18.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 8 17:11:18.389: INFO: PodSpec: initContainers in spec.initContainers Mar 8 17:12:06.194: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dfe32b2b-1890-4e1c-bd7f-4f3f51ddbfd2", GenerateName:"", Namespace:"init-container-7048", SelfLink:"/api/v1/namespaces/init-container-7048/pods/pod-init-dfe32b2b-1890-4e1c-bd7f-4f3f51ddbfd2", UID:"0f130da9-9344-4fd4-8740-1d790685bb6f", ResourceVersion:"46045", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719284278, loc:(*time.Location)(0x7fda4c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"389313822"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xqx8c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005f95040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xqx8c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xqx8c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xqx8c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00563dcd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c711f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00563dd70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00563ddb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00563ddb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00563ddbc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284278, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284278, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284278, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284278, loc:(*time.Location)(0x7fda4c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.16", 
PodIP:"10.244.1.156", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.156"}}, StartTime:(*v1.Time)(0xc0030d3d80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c712d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c71340)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://4ceefe4ad3f4ce8695a38048ddce2f92d1ede5eebe9e2a64d00b8261c61d2614", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030d3de0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030d3da0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00563de4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:12:06.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7048" for this suite. 
• [SLOW TEST:47.921 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":50,"skipped":891,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:12:06.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-69506f61-1e38-41f9-af53-b0faa579dd64 in namespace container-probe-9719 Mar 8 17:12:08.335: INFO: Started pod liveness-69506f61-1e38-41f9-af53-b0faa579dd64 in namespace container-probe-9719 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 17:12:08.337: INFO: Initial restart count of pod liveness-69506f61-1e38-41f9-af53-b0faa579dd64 is 0 Mar 8 17:12:26.371: INFO: Restart count of pod container-probe-9719/liveness-69506f61-1e38-41f9-af53-b0faa579dd64 is now 1 (18.033748273s elapsed) Mar 8 17:12:46.456: INFO: Restart count of pod container-probe-9719/liveness-69506f61-1e38-41f9-af53-b0faa579dd64 is now 2 (38.118115427s elapsed) Mar 8 17:13:06.499: INFO: Restart count of pod container-probe-9719/liveness-69506f61-1e38-41f9-af53-b0faa579dd64 is now 3 (58.161351738s elapsed) Mar 8 17:13:24.533: INFO: Restart count of pod container-probe-9719/liveness-69506f61-1e38-41f9-af53-b0faa579dd64 is now 4 (1m16.195837569s elapsed) Mar 8 17:14:38.748: INFO: Restart count of pod container-probe-9719/liveness-69506f61-1e38-41f9-af53-b0faa579dd64 is now 5 (2m30.410970107s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:14:38.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9719" for this suite. 
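The restart count above climbs 1 through 5 and never resets, which is the monotonicity the spec checks. A sketch of observing the same thing by hand, assuming an exec liveness probe that can never succeed (names and timings illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/does-not-exist"]   # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# poll this; it should only ever increase, mirroring the progression logged above
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'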
• [SLOW TEST:152.491 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":912,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:14:38.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-3d9a9233-b120-4639-9d7c-466d14cfe7e2 STEP: Creating a pod to test consume configMaps Mar 8 17:14:38.861: INFO: Waiting up to 5m0s for pod "pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8" in namespace "configmap-7237" to be "Succeeded or Failed" Mar 8 17:14:38.886: INFO: Pod "pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.668539ms Mar 8 17:14:40.890: INFO: Pod "pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028879374s STEP: Saw pod success Mar 8 17:14:40.890: INFO: Pod "pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8" satisfied condition "Succeeded or Failed" Mar 8 17:14:40.893: INFO: Trying to get logs from node latest-worker pod pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8 container configmap-volume-test: STEP: delete the pod Mar 8 17:14:40.922: INFO: Waiting for pod pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8 to disappear Mar 8 17:14:40.954: INFO: Pod pod-configmaps-04ad4fee-a589-4173-857d-b084391832d8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:14:40.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7237" for this suite. 
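What this spec verifies is that files projected from a ConfigMap volume carry the volume's defaultMode bits. A hand-rolled equivalent with illustrative names (stat -L dereferences the symlink the atomic writer creates for each key):

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      defaultMode: 0400
EOF
kubectl logs cm-mode-demo   # expect 400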
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:14:40.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-2vjd STEP: Creating a pod to test atomic-volume-subpath Mar 8 17:14:41.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2vjd" in namespace "subpath-2503" to be "Succeeded or Failed" Mar 8 17:14:41.119: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.928363ms Mar 8 17:14:43.122: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013461857s Mar 8 17:14:45.126: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 4.01715944s Mar 8 17:14:47.130: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 6.021252951s Mar 8 17:14:49.133: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 8.024384953s Mar 8 17:14:51.136: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 10.027552032s Mar 8 17:14:53.140: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 12.030782164s Mar 8 17:14:55.144: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 14.034782326s Mar 8 17:14:57.147: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 16.038346027s Mar 8 17:14:59.151: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 18.042360568s Mar 8 17:15:01.155: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Running", Reason="", readiness=true. Elapsed: 20.046623509s Mar 8 17:15:03.160: INFO: Pod "pod-subpath-test-secret-2vjd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.050756102s STEP: Saw pod success Mar 8 17:15:03.160: INFO: Pod "pod-subpath-test-secret-2vjd" satisfied condition "Succeeded or Failed" Mar 8 17:15:03.163: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-2vjd container test-container-subpath-secret-2vjd: STEP: delete the pod Mar 8 17:15:03.194: INFO: Waiting for pod pod-subpath-test-secret-2vjd to disappear Mar 8 17:15:03.202: INFO: Pod pod-subpath-test-secret-2vjd no longer exists STEP: Deleting pod pod-subpath-test-secret-2vjd Mar 8 17:15:03.202: INFO: Deleting pod "pod-subpath-test-secret-2vjd" in namespace "subpath-2503" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:03.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2503" for this suite. • [SLOW TEST:22.250 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":53,"skipped":1000,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:03.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Mar 8 17:15:03.294: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6825" to be "Succeeded or Failed" Mar 8 17:15:03.299: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.945753ms Mar 8 17:15:05.302: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008332323s Mar 8 17:15:07.306: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012326618s STEP: Saw pod success Mar 8 17:15:07.306: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 8 17:15:07.309: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 8 17:15:07.362: INFO: Waiting for pod pod-host-path-test to disappear Mar 8 17:15:07.377: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:07.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6825" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":1030,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:07.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6987 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6987 STEP: creating replication controller externalsvc in namespace services-6987 I0308 17:15:07.628761 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6987, replica count: 2 I0308 17:15:10.679257 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 8 17:15:10.702: INFO: Creating new exec pod Mar 8 17:15:12.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6987 execpod6m5ht -- /bin/sh -x -c nslookup nodeport-service' Mar 8 17:15:12.970: INFO: stderr: "I0308 17:15:12.892388 470 log.go:172] (0xc00003abb0) (0xc0005c7360) Create stream\nI0308 17:15:12.892432 470 log.go:172] (0xc00003abb0) (0xc0005c7360) Stream added, broadcasting: 1\nI0308 17:15:12.894383 470 log.go:172] (0xc00003abb0) Reply frame received for 1\nI0308 17:15:12.894416 470 log.go:172] (0xc00003abb0) (0xc000928000) Create stream\nI0308 17:15:12.894424 470 log.go:172] (0xc00003abb0) (0xc000928000) Stream added, broadcasting: 3\nI0308 17:15:12.895132 470 log.go:172] (0xc00003abb0) Reply frame received for 3\nI0308 17:15:12.895164 470 log.go:172] (0xc00003abb0) (0xc0009280a0) Create stream\nI0308 17:15:12.895175 470 log.go:172] (0xc00003abb0) (0xc0009280a0) Stream added, broadcasting: 5\nI0308 17:15:12.895836 470 log.go:172] (0xc00003abb0) Reply frame 
received for 5\nI0308 17:15:12.956703 470 log.go:172] (0xc00003abb0) Data frame received for 5\nI0308 17:15:12.956727 470 log.go:172] (0xc0009280a0) (5) Data frame handling\nI0308 17:15:12.956742 470 log.go:172] (0xc0009280a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0308 17:15:12.963608 470 log.go:172] (0xc00003abb0) Data frame received for 3\nI0308 17:15:12.963630 470 log.go:172] (0xc000928000) (3) Data frame handling\nI0308 17:15:12.963642 470 log.go:172] (0xc000928000) (3) Data frame sent\nI0308 17:15:12.964945 470 log.go:172] (0xc00003abb0) Data frame received for 3\nI0308 17:15:12.964973 470 log.go:172] (0xc000928000) (3) Data frame handling\nI0308 17:15:12.964986 470 log.go:172] (0xc000928000) (3) Data frame sent\nI0308 17:15:12.966275 470 log.go:172] (0xc00003abb0) Data frame received for 5\nI0308 17:15:12.966296 470 log.go:172] (0xc0009280a0) (5) Data frame handling\nI0308 17:15:12.966311 470 log.go:172] (0xc00003abb0) Data frame received for 3\nI0308 17:15:12.966316 470 log.go:172] (0xc000928000) (3) Data frame handling\nI0308 17:15:12.966840 470 log.go:172] (0xc00003abb0) Data frame received for 1\nI0308 17:15:12.966855 470 log.go:172] (0xc0005c7360) (1) Data frame handling\nI0308 17:15:12.966863 470 log.go:172] (0xc0005c7360) (1) Data frame sent\nI0308 17:15:12.966873 470 log.go:172] (0xc00003abb0) (0xc0005c7360) Stream removed, broadcasting: 1\nI0308 17:15:12.966885 470 log.go:172] (0xc00003abb0) Go away received\nI0308 17:15:12.967148 470 log.go:172] (0xc00003abb0) (0xc0005c7360) Stream removed, broadcasting: 1\nI0308 17:15:12.967163 470 log.go:172] (0xc00003abb0) (0xc000928000) Stream removed, broadcasting: 3\nI0308 17:15:12.967170 470 log.go:172] (0xc00003abb0) (0xc0009280a0) Stream removed, broadcasting: 5\n" Mar 8 17:15:12.970: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6987.svc.cluster.local\tcanonical name = externalsvc.services-6987.svc.cluster.local.\nName:\texternalsvc.services-6987.svc.cluster.local\nAddress: 10.96.50.106\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6987, will wait for the garbage collector to delete the pods Mar 8 17:15:13.028: INFO: Deleting ReplicationController externalsvc took: 5.085322ms Mar 8 17:15:13.328: INFO: Terminating ReplicationController externalsvc pods took: 300.217948ms Mar 8 17:15:22.557: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:22.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6987" for this suite. 
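The nslookup output above is the crux: after the type change, resolving nodeport-service yields a CNAME to the externalsvc FQDN rather than a cluster IP. The same record shape can be produced directly (service name illustrative; run the lookup from any in-cluster pod that has dig):

kubectl create service externalname demo-ext \
  --external-name=externalsvc.services-6987.svc.cluster.local
# inside a pod, the resolv.conf search path expands the short name;
# expect a CNAME answer like the canonical-name line captured above
dig +search +noall +answer demo-ext CNAME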
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:15.200 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":55,"skipped":1030,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:22.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 8 17:15:22.658: INFO: >>> kubeConfig: /root/.kube/config Mar 8 17:15:24.492: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:32.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5747" for this suite. 
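This spec drives kubectl against the apiserver's aggregated OpenAPI document, which must contain a distinct schema for each kind even when two CRDs share a group and version. With any CRD installed, the same surface is visible by hand (resource names "foos"/"bars" are illustrative):

# explain is served entirely from the published spec
kubectl explain foos.spec
kubectl explain bars.spec
# or inspect the raw aggregated document directly
kubectl get --raw /openapi/v2 | head -c 400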
• [SLOW TEST:10.004 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":56,"skipped":1030,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:32.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-6c60eaa4-4f84-401d-9de6-c75e287a9dfc [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:32.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1972" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":57,"skipped":1038,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:32.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 8 17:15:32.720: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:48.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5217" for this suite. 
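After the rename, only the new version name may be served. A quick way to list which versions the apiserver currently serves for a CRD (resource name illustrative):

kubectl get crd foos.example.com \
  -o jsonpath='{.spec.versions[?(@.served==true)].name}'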
• [SLOW TEST:15.455 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":58,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:48.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7054 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7054 I0308 17:15:48.505681 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7054, replica count: 2 I0308 17:15:51.556102 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 17:15:51.556: INFO: Creating new exec pod Mar 8 17:15:56.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7054 execpod8ztgg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 17:15:57.009: INFO: stderr: "I0308 17:15:56.923678 492 log.go:172] (0xc000b35810) (0xc000932960) Create stream\nI0308 17:15:56.923734 492 log.go:172] (0xc000b35810) (0xc000932960) Stream added, broadcasting: 1\nI0308 17:15:56.928618 492 log.go:172] (0xc000b35810) Reply frame received for 1\nI0308 17:15:56.928667 492 log.go:172] (0xc000b35810) (0xc0007bf5e0) Create stream\nI0308 17:15:56.928683 492 log.go:172] (0xc000b35810) (0xc0007bf5e0) Stream added, broadcasting: 3\nI0308 17:15:56.930162 492 log.go:172] (0xc000b35810) Reply frame received for 3\nI0308 17:15:56.930212 492 log.go:172] (0xc000b35810) (0xc0005e6a00) Create stream\nI0308 17:15:56.930237 492 log.go:172] (0xc000b35810) (0xc0005e6a00) Stream added, broadcasting: 5\nI0308 17:15:56.933707 492 log.go:172] (0xc000b35810) Reply frame received for 5\nI0308 17:15:57.001650 492 log.go:172] (0xc000b35810) Data frame received for 5\nI0308 17:15:57.001674 492 log.go:172] (0xc0005e6a00) (5) Data frame handling\nI0308 17:15:57.001692 492 log.go:172] (0xc0005e6a00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 17:15:57.003136 492 log.go:172] 
(0xc000b35810) Data frame received for 5\nI0308 17:15:57.003172 492 log.go:172] (0xc0005e6a00) (5) Data frame handling\nI0308 17:15:57.003196 492 log.go:172] (0xc0005e6a00) (5) Data frame sent\nI0308 17:15:57.003215 492 log.go:172] (0xc000b35810) Data frame received for 5\nI0308 17:15:57.003230 492 log.go:172] (0xc0005e6a00) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 17:15:57.003248 492 log.go:172] (0xc000b35810) Data frame received for 3\nI0308 17:15:57.003259 492 log.go:172] (0xc0007bf5e0) (3) Data frame handling\nI0308 17:15:57.004817 492 log.go:172] (0xc000b35810) Data frame received for 1\nI0308 17:15:57.004834 492 log.go:172] (0xc000932960) (1) Data frame handling\nI0308 17:15:57.004848 492 log.go:172] (0xc000932960) (1) Data frame sent\nI0308 17:15:57.004861 492 log.go:172] (0xc000b35810) (0xc000932960) Stream removed, broadcasting: 1\nI0308 17:15:57.004872 492 log.go:172] (0xc000b35810) Go away received\nI0308 17:15:57.005318 492 log.go:172] (0xc000b35810) (0xc000932960) Stream removed, broadcasting: 1\nI0308 17:15:57.005337 492 log.go:172] (0xc000b35810) (0xc0007bf5e0) Stream removed, broadcasting: 3\nI0308 17:15:57.005345 492 log.go:172] (0xc000b35810) (0xc0005e6a00) Stream removed, broadcasting: 5\n" Mar 8 17:15:57.009: INFO: stdout: "" Mar 8 17:15:57.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7054 execpod8ztgg -- /bin/sh -x -c nc -zv -t -w 2 10.96.38.243 80' Mar 8 17:15:57.205: INFO: stderr: "I0308 17:15:57.136184 514 log.go:172] (0xc0009ae840) (0xc0009843c0) Create stream\nI0308 17:15:57.136235 514 log.go:172] (0xc0009ae840) (0xc0009843c0) Stream added, broadcasting: 1\nI0308 17:15:57.139389 514 log.go:172] (0xc0009ae840) Reply frame received for 1\nI0308 17:15:57.139478 514 log.go:172] (0xc0009ae840) (0xc000aca1e0) Create stream\nI0308 17:15:57.139517 514 log.go:172] (0xc0009ae840) (0xc000aca1e0) Stream added, broadcasting: 3\nI0308 17:15:57.141390 514 log.go:172] (0xc0009ae840) Reply frame received for 3\nI0308 17:15:57.141417 514 log.go:172] (0xc0009ae840) (0xc00061b680) Create stream\nI0308 17:15:57.141425 514 log.go:172] (0xc0009ae840) (0xc00061b680) Stream added, broadcasting: 5\nI0308 17:15:57.142170 514 log.go:172] (0xc0009ae840) Reply frame received for 5\nI0308 17:15:57.200297 514 log.go:172] (0xc0009ae840) Data frame received for 3\nI0308 17:15:57.200324 514 log.go:172] (0xc000aca1e0) (3) Data frame handling\nI0308 17:15:57.200341 514 log.go:172] (0xc0009ae840) Data frame received for 5\nI0308 17:15:57.200347 514 log.go:172] (0xc00061b680) (5) Data frame handling\nI0308 17:15:57.200355 514 log.go:172] (0xc00061b680) (5) Data frame sent\nI0308 17:15:57.200361 514 log.go:172] (0xc0009ae840) Data frame received for 5\nI0308 17:15:57.200367 514 log.go:172] (0xc00061b680) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.38.243 80\nConnection to 10.96.38.243 80 port [tcp/http] succeeded!\nI0308 17:15:57.201371 514 log.go:172] (0xc0009ae840) Data frame received for 1\nI0308 17:15:57.201387 514 log.go:172] (0xc0009843c0) (1) Data frame handling\nI0308 17:15:57.201394 514 log.go:172] (0xc0009843c0) (1) Data frame sent\nI0308 17:15:57.201404 514 log.go:172] (0xc0009ae840) (0xc0009843c0) Stream removed, broadcasting: 1\nI0308 17:15:57.201420 514 log.go:172] (0xc0009ae840) Go away received\nI0308 17:15:57.201736 514 log.go:172] (0xc0009ae840) (0xc0009843c0) Stream removed, broadcasting: 1\nI0308 17:15:57.201761 514 log.go:172] 
(0xc0009ae840) (0xc000aca1e0) Stream removed, broadcasting: 3\nI0308 17:15:57.201772 514 log.go:172] (0xc0009ae840) (0xc00061b680) Stream removed, broadcasting: 5\n" Mar 8 17:15:57.205: INFO: stdout: "" Mar 8 17:15:57.205: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:15:57.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7054" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.164 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":59,"skipped":1083,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:15:57.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 17:15:57.381: INFO: Waiting up to 5m0s for pod "pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb" in namespace "emptydir-6777" to be "Succeeded or Failed" Mar 8 17:15:57.385: INFO: Pod "pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183082ms Mar 8 17:15:59.389: INFO: Pod "pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008355202s Mar 8 17:16:01.392: INFO: Pod "pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011801666s STEP: Saw pod success Mar 8 17:16:01.393: INFO: Pod "pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb" satisfied condition "Succeeded or Failed" Mar 8 17:16:01.396: INFO: Trying to get logs from node latest-worker pod pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb container test-container: STEP: delete the pod Mar 8 17:16:01.416: INFO: Waiting for pod pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb to disappear Mar 8 17:16:01.420: INFO: Pod pod-e67f5303-448a-4fc3-9e17-95d01b3ebefb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:16:01.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6777" for this suite. 
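Roughly what the (non-root,0777,tmpfs) variant asserts: a tmpfs-backed emptyDir is writable by a non-root user, and a file created there can carry 0777 bits. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs
EOF
kubectl logs emptydir-mode-demo   # expect 777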
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":1088,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:16:01.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-5cedc398-9a04-4a5a-8113-541c4abd914c STEP: Creating a pod to test consume secrets Mar 8 17:16:01.607: INFO: Waiting up to 5m0s for pod "pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976" in namespace "secrets-3516" to be "Succeeded or Failed" Mar 8 17:16:01.623: INFO: Pod "pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976": Phase="Pending", Reason="", readiness=false. Elapsed: 15.804672ms Mar 8 17:16:03.641: INFO: Pod "pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033811109s STEP: Saw pod success Mar 8 17:16:03.641: INFO: Pod "pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976" satisfied condition "Succeeded or Failed" Mar 8 17:16:03.644: INFO: Trying to get logs from node latest-worker pod pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976 container secret-volume-test: STEP: delete the pod Mar 8 17:16:03.660: INFO: Waiting for pod pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976 to disappear Mar 8 17:16:03.665: INFO: Pod pod-secrets-20a2cafd-5519-4b9a-a22b-1c66eb186976 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:16:03.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3516" for this suite. STEP: Destroying namespace "secret-namespace-2067" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1097,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:16:03.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-65ba5902-77dc-47dc-8f7c-508a397bee26 in namespace container-probe-6906 Mar 8 17:16:05.795: INFO: Started pod test-webserver-65ba5902-77dc-47dc-8f7c-508a397bee26 in namespace container-probe-6906 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 17:16:05.797: INFO: Initial restart count of pod test-webserver-65ba5902-77dc-47dc-8f7c-508a397bee26 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:06.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6906" for this suite. 
• [SLOW TEST:242.742 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:06.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:20:06.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6" in namespace "projected-1562" to be "Succeeded or Failed" Mar 8 17:20:06.492: INFO: Pod "downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.004604ms Mar 8 17:20:08.496: INFO: Pod "downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009854292s STEP: Saw pod success Mar 8 17:20:08.496: INFO: Pod "downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6" satisfied condition "Succeeded or Failed" Mar 8 17:20:08.499: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6 container client-container: STEP: delete the pod Mar 8 17:20:08.552: INFO: Waiting for pod downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6 to disappear Mar 8 17:20:08.553: INFO: Pod downwardapi-volume-99db1618-f093-4c03-80af-1dea7c36bce6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:08.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1562" for this suite. 
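The downward API volume exercised above maps pod metadata to files inside the container. A minimal sketch (names illustrative): the container reads its own pod name from a file populated via fieldRef.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["cat", "/etc/podinfo/podname"]   # prints "downwardapi-volume-demo"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF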
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:08.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-qtl9 STEP: Creating a pod to test atomic-volume-subpath Mar 8 17:20:08.630: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qtl9" in namespace "subpath-3130" to be "Succeeded or Failed" Mar 8 17:20:08.671: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.763088ms Mar 8 17:20:10.675: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 2.044470898s Mar 8 17:20:12.678: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 4.047787668s Mar 8 17:20:14.682: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 6.051089434s Mar 8 17:20:16.804: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 8.173659647s Mar 8 17:20:18.808: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 10.177897486s Mar 8 17:20:20.813: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 12.182104263s Mar 8 17:20:22.816: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 14.185822943s Mar 8 17:20:24.821: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 16.19007712s Mar 8 17:20:26.825: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 18.194213242s Mar 8 17:20:28.829: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Running", Reason="", readiness=true. Elapsed: 20.198467179s Mar 8 17:20:30.833: INFO: Pod "pod-subpath-test-configmap-qtl9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.202950493s STEP: Saw pod success Mar 8 17:20:30.833: INFO: Pod "pod-subpath-test-configmap-qtl9" satisfied condition "Succeeded or Failed" Mar 8 17:20:30.837: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-qtl9 container test-container-subpath-configmap-qtl9: STEP: delete the pod Mar 8 17:20:30.855: INFO: Waiting for pod pod-subpath-test-configmap-qtl9 to disappear Mar 8 17:20:30.859: INFO: Pod pod-subpath-test-configmap-qtl9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qtl9 Mar 8 17:20:30.859: INFO: Deleting pod "pod-subpath-test-configmap-qtl9" in namespace "subpath-3130" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:30.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3130" for this suite. • [SLOW TEST:22.310 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":64,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:30.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 17:20:30.976: INFO: Waiting up to 5m0s for pod "pod-6688bb40-5b00-48c7-833a-ce1d175d7704" in namespace "emptydir-3270" to be "Succeeded or Failed" Mar 8 17:20:30.979: INFO: Pod "pod-6688bb40-5b00-48c7-833a-ce1d175d7704": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598169ms Mar 8 17:20:32.995: INFO: Pod "pod-6688bb40-5b00-48c7-833a-ce1d175d7704": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019471952s Mar 8 17:20:35.115: INFO: Pod "pod-6688bb40-5b00-48c7-833a-ce1d175d7704": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.139601732s STEP: Saw pod success Mar 8 17:20:35.115: INFO: Pod "pod-6688bb40-5b00-48c7-833a-ce1d175d7704" satisfied condition "Succeeded or Failed" Mar 8 17:20:35.118: INFO: Trying to get logs from node latest-worker pod pod-6688bb40-5b00-48c7-833a-ce1d175d7704 container test-container: STEP: delete the pod Mar 8 17:20:35.255: INFO: Waiting for pod pod-6688bb40-5b00-48c7-833a-ce1d175d7704 to disappear Mar 8 17:20:35.264: INFO: Pod pod-6688bb40-5b00-48c7-833a-ce1d175d7704 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:35.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3270" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1201,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:35.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 17:20:35.334: INFO: Waiting up to 5m0s for pod "pod-359cd1ce-6bad-4904-9e61-b796d6b45631" in namespace "emptydir-2216" to be "Succeeded or Failed" Mar 8 17:20:35.336: INFO: Pod "pod-359cd1ce-6bad-4904-9e61-b796d6b45631": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214935ms Mar 8 17:20:37.340: INFO: Pod "pod-359cd1ce-6bad-4904-9e61-b796d6b45631": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005883159s STEP: Saw pod success Mar 8 17:20:37.340: INFO: Pod "pod-359cd1ce-6bad-4904-9e61-b796d6b45631" satisfied condition "Succeeded or Failed" Mar 8 17:20:37.342: INFO: Trying to get logs from node latest-worker pod pod-359cd1ce-6bad-4904-9e61-b796d6b45631 container test-container: STEP: delete the pod Mar 8 17:20:37.362: INFO: Waiting for pod pod-359cd1ce-6bad-4904-9e61-b796d6b45631 to disappear Mar 8 17:20:37.402: INFO: Pod pod-359cd1ce-6bad-4904-9e61-b796d6b45631 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:37.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2216" for this suite. 
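Back to the subpath run above (pod-subpath-test-configmap-qtl9): mounting a single configMap key with subPath onto a path that already exists in the image replaces just that one file, which is what "mountPath of existing file" refers to. A hedged sketch using an nginx config file as the existing target; all names are illustrative:

kubectl create configmap subpath-demo-cm \
  --from-literal=nginx.conf='events {} http { server { listen 80; return 200 "from the configMap\n"; } }'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo
spec:
  containers:
  - name: test-container
    image: nginx:1.17
    volumeMounts:
    - name: cm
      mountPath: /etc/nginx/nginx.conf   # a file the image already ships
      subPath: nginx.conf                # overlay only this file, not the directory
  volumes:
  - name: cm
    configMap:
      name: subpath-demo-cm
EOF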
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1206,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:37.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-3aacd4e0-d96e-4096-b5b9-75219856949b STEP: Creating a pod to test consume configMaps Mar 8 17:20:37.519: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf" in namespace "projected-8526" to be "Succeeded or Failed" Mar 8 17:20:37.528: INFO: Pod "pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490944ms Mar 8 17:20:39.532: INFO: Pod "pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01273379s STEP: Saw pod success Mar 8 17:20:39.532: INFO: Pod "pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf" satisfied condition "Succeeded or Failed" Mar 8 17:20:39.535: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf container projected-configmap-volume-test: STEP: delete the pod Mar 8 17:20:39.559: INFO: Waiting for pod pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf to disappear Mar 8 17:20:39.563: INFO: Pod pod-projected-configmaps-9aec25ec-3de7-450c-b169-312e5baf43cf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:39.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8526" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1208,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:39.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:20:40.270: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:20:42.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284840, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284840, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284840, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719284840, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:20:45.302: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:20:57.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3967" for this suite. 
STEP: Destroying namespace "webhook-3967-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":68,"skipped":1213,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:20:57.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 8 17:20:57.607: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 8 17:21:04.742: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:04.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2281" for this suite. 
• [SLOW TEST:7.182 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1224,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:04.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:21:04.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5" in namespace "projected-5794" to be "Succeeded or Failed" Mar 8 17:21:04.866: INFO: Pod "downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.686843ms Mar 8 17:21:06.870: INFO: Pod "downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016982112s Mar 8 17:21:08.874: INFO: Pod "downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020208475s STEP: Saw pod success Mar 8 17:21:08.874: INFO: Pod "downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5" satisfied condition "Succeeded or Failed" Mar 8 17:21:08.876: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5 container client-container: STEP: delete the pod Mar 8 17:21:08.891: INFO: Waiting for pod downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5 to disappear Mar 8 17:21:08.926: INFO: Pod downwardapi-volume-300dc8c2-156f-4ef9-91c4-414f1156d4f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:08.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5794" for this suite. 
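What "node allocatable (memory) as default memory limit" means concretely: when the container declares no memory limit, a downward API file for limits.memory reports the node's allocatable memory rather than failing. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    # no resources.limits.memory is set, so this reports node allocatable memory
    command: ["cat", "/etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi             # report the value in mebibytes
EOF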
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:08.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 17:21:09.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5359' Mar 8 17:21:11.737: INFO: stderr: "" Mar 8 17:21:11.737: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Mar 8 17:21:11.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5359' Mar 8 17:21:22.473: INFO: stderr: "" Mar 8 17:21:22.473: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:22.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5359" for this suite. 
• [SLOW TEST:13.492 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":71,"skipped":1260,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:22.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-363 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-363 STEP: creating replication controller externalsvc in namespace services-363 I0308 17:21:22.614150 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-363, replica count: 2 I0308 17:21:25.664639 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 8 17:21:25.696: INFO: Creating new exec pod Mar 8 17:21:27.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-363 execpodqrmwv -- /bin/sh -x -c nslookup clusterip-service' Mar 8 17:21:27.950: INFO: stderr: "I0308 17:21:27.881126 583 log.go:172] (0xc0009344d0) (0xc000402b40) Create stream\nI0308 17:21:27.881173 583 log.go:172] (0xc0009344d0) (0xc000402b40) Stream added, broadcasting: 1\nI0308 17:21:27.883258 583 log.go:172] (0xc0009344d0) Reply frame received for 1\nI0308 17:21:27.883289 583 log.go:172] (0xc0009344d0) (0xc00099c000) Create stream\nI0308 17:21:27.883301 583 log.go:172] (0xc0009344d0) (0xc00099c000) Stream added, broadcasting: 3\nI0308 17:21:27.884001 583 log.go:172] (0xc0009344d0) Reply frame received for 3\nI0308 17:21:27.884028 583 log.go:172] (0xc0009344d0) (0xc00094c000) Create stream\nI0308 17:21:27.884036 583 log.go:172] (0xc0009344d0) (0xc00094c000) Stream added, broadcasting: 5\nI0308 17:21:27.885306 583 log.go:172] (0xc0009344d0) Reply frame received for 5\nI0308 17:21:27.937045 583 log.go:172] (0xc0009344d0) Data frame received for 5\nI0308 17:21:27.937064 583 log.go:172] (0xc00094c000) (5) Data frame handling\nI0308 17:21:27.937079 583 log.go:172] 
(0xc00094c000) (5) Data frame sent\n+ nslookup clusterip-service\nI0308 17:21:27.943458 583 log.go:172] (0xc0009344d0) Data frame received for 3\nI0308 17:21:27.943481 583 log.go:172] (0xc00099c000) (3) Data frame handling\nI0308 17:21:27.943504 583 log.go:172] (0xc00099c000) (3) Data frame sent\nI0308 17:21:27.944275 583 log.go:172] (0xc0009344d0) Data frame received for 3\nI0308 17:21:27.944296 583 log.go:172] (0xc00099c000) (3) Data frame handling\nI0308 17:21:27.944327 583 log.go:172] (0xc00099c000) (3) Data frame sent\nI0308 17:21:27.944721 583 log.go:172] (0xc0009344d0) Data frame received for 5\nI0308 17:21:27.944772 583 log.go:172] (0xc00094c000) (5) Data frame handling\nI0308 17:21:27.944869 583 log.go:172] (0xc0009344d0) Data frame received for 3\nI0308 17:21:27.944889 583 log.go:172] (0xc00099c000) (3) Data frame handling\nI0308 17:21:27.946177 583 log.go:172] (0xc0009344d0) Data frame received for 1\nI0308 17:21:27.946199 583 log.go:172] (0xc000402b40) (1) Data frame handling\nI0308 17:21:27.946219 583 log.go:172] (0xc000402b40) (1) Data frame sent\nI0308 17:21:27.946238 583 log.go:172] (0xc0009344d0) (0xc000402b40) Stream removed, broadcasting: 1\nI0308 17:21:27.946260 583 log.go:172] (0xc0009344d0) Go away received\nI0308 17:21:27.946636 583 log.go:172] (0xc0009344d0) (0xc000402b40) Stream removed, broadcasting: 1\nI0308 17:21:27.946666 583 log.go:172] (0xc0009344d0) (0xc00099c000) Stream removed, broadcasting: 3\nI0308 17:21:27.946674 583 log.go:172] (0xc0009344d0) (0xc00094c000) Stream removed, broadcasting: 5\n" Mar 8 17:21:27.950: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-363.svc.cluster.local\tcanonical name = externalsvc.services-363.svc.cluster.local.\nName:\texternalsvc.services-363.svc.cluster.local\nAddress: 10.96.174.152\n\n" STEP: deleting ReplicationController externalsvc in namespace services-363, will wait for the garbage collector to delete the pods Mar 8 17:21:28.009: INFO: Deleting ReplicationController externalsvc took: 5.642687ms Mar 8 17:21:28.309: INFO: Terminating ReplicationController externalsvc pods took: 300.217721ms Mar 8 17:21:42.537: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-363" for this suite. 
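The type flip above can be approximated with a JSON merge patch: set type and externalName and null out clusterIP (a null in a merge patch deletes the field, and clearing clusterIP is permitted on the transition to ExternalName). Afterwards the service name resolves as a CNAME, which is what the nslookup output in the log shows. Values below echo the test's names and are otherwise illustrative:

kubectl patch service clusterip-service -n services-363 --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-363.svc.cluster.local","clusterIP":null}}'

# the service's DNS name now resolves to a CNAME for the external name
kubectl exec -n services-363 execpodqrmwv -- nslookup clusterip-service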
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:20.088 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":72,"skipped":1265,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:42.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-q2q46 in namespace proxy-1659 I0308 17:21:42.697634 7 runners.go:190] Created replication controller with name: proxy-service-q2q46, namespace: proxy-1659, replica count: 1 I0308 17:21:43.747988 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 17:21:44.748189 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:45.748367 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:46.748548 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:47.748722 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:48.748914 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:49.749119 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:50.749362 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:51.749583 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 17:21:52.749784 7 runners.go:190] proxy-service-q2q46 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 17:21:52.752: INFO: setup took 10.108563803s, starting test cases STEP: running 16 cases, 20 attempts per 
case, 320 total attempts Mar 8 17:21:52.759: INFO: (0) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 6.079781ms) Mar 8 17:21:52.759: INFO: (0) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 6.269749ms) Mar 8 17:21:52.760: INFO: (0) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 6.987437ms) Mar 8 17:21:52.760: INFO: (0) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 8.127431ms) Mar 8 17:21:52.761: INFO: (0) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 8.045224ms) Mar 8 17:21:52.761: INFO: (0) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 8.178968ms) Mar 8 17:21:52.761: INFO: (0) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 8.110963ms) Mar 8 17:21:52.761: INFO: (0) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 8.248293ms) Mar 8 17:21:52.761: INFO: (0) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 8.368212ms) Mar 8 17:21:52.761: INFO: (0) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 8.621673ms) Mar 8 17:21:52.763: INFO: (0) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 10.863033ms) Mar 8 17:21:52.770: INFO: (0) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 17.336067ms) Mar 8 17:21:52.770: INFO: (0) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 17.458941ms) Mar 8 17:21:52.773: INFO: (0) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 20.823566ms) Mar 8 17:21:52.773: INFO: (0) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 20.71686ms) Mar 8 17:21:52.775: INFO: (0) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test<... (200; 7.921138ms) Mar 8 17:21:52.784: INFO: (1) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 9.165994ms) Mar 8 17:21:52.784: INFO: (1) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test (200; 9.528171ms) Mar 8 17:21:52.785: INFO: (1) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 9.990547ms) Mar 8 17:21:52.785: INFO: (1) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... 
(200; 10.239419ms) Mar 8 17:21:52.785: INFO: (1) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 10.265114ms) Mar 8 17:21:52.785: INFO: (1) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 10.407002ms) Mar 8 17:21:52.785: INFO: (1) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 10.390289ms) Mar 8 17:21:52.785: INFO: (1) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 10.413629ms) Mar 8 17:21:52.786: INFO: (1) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 10.465856ms) Mar 8 17:21:52.786: INFO: (1) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 10.671925ms) Mar 8 17:21:52.786: INFO: (1) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 10.994682ms) Mar 8 17:21:52.786: INFO: (1) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 10.988886ms) Mar 8 17:21:52.805: INFO: (2) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 17.908095ms) Mar 8 17:21:52.806: INFO: (2) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 18.270287ms) Mar 8 17:21:52.806: INFO: (2) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 18.62641ms) Mar 8 17:21:52.808: INFO: (2) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 21.175311ms) Mar 8 17:21:52.809: INFO: (2) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 21.651831ms) Mar 8 17:21:52.809: INFO: (2) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: ... (200; 22.453926ms) Mar 8 17:21:52.809: INFO: (2) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 21.748403ms) Mar 8 17:21:52.809: INFO: (2) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 21.505333ms) Mar 8 17:21:52.809: INFO: (2) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 21.416771ms) Mar 8 17:21:52.809: INFO: (2) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 23.067721ms) Mar 8 17:21:52.813: INFO: (2) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 25.143629ms) Mar 8 17:21:52.819: INFO: (3) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 6.396303ms) Mar 8 17:21:52.819: INFO: (3) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 6.585909ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 6.644159ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 6.852384ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... 
(200; 7.180891ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 7.191925ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 7.234764ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 7.390532ms) Mar 8 17:21:52.820: INFO: (3) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 7.579461ms) Mar 8 17:21:52.822: INFO: (3) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test<... (200; 5.623348ms) Mar 8 17:21:52.830: INFO: (4) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 5.826038ms) Mar 8 17:21:52.830: INFO: (4) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 6.207625ms) Mar 8 17:21:52.830: INFO: (4) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 6.104392ms) Mar 8 17:21:52.830: INFO: (4) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 6.521924ms) Mar 8 17:21:52.831: INFO: (4) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 7.783281ms) Mar 8 17:21:52.832: INFO: (4) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 8.165224ms) Mar 8 17:21:52.832: INFO: (4) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 8.297368ms) Mar 8 17:21:52.832: INFO: (4) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 8.849596ms) Mar 8 17:21:52.833: INFO: (4) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 9.304831ms) Mar 8 17:21:52.833: INFO: (4) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 9.140543ms) Mar 8 17:21:52.837: INFO: (5) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.4017ms) Mar 8 17:21:52.838: INFO: (5) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test (200; 5.593846ms) Mar 8 17:21:52.842: INFO: (5) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 8.556402ms) Mar 8 17:21:52.844: INFO: (5) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 11.230144ms) Mar 8 17:21:52.845: INFO: (5) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 11.646563ms) Mar 8 17:21:52.845: INFO: (5) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 11.671264ms) Mar 8 17:21:52.845: INFO: (5) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 11.83765ms) Mar 8 17:21:52.845: INFO: (5) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 11.879758ms) Mar 8 17:21:52.845: INFO: (5) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 12.339227ms) Mar 8 17:21:52.846: INFO: (5) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... 
(200; 13.249281ms) Mar 8 17:21:52.846: INFO: (5) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 13.421614ms) Mar 8 17:21:52.846: INFO: (5) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 13.41582ms) Mar 8 17:21:52.847: INFO: (5) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 13.361504ms) Mar 8 17:21:52.847: INFO: (5) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 13.407568ms) Mar 8 17:21:52.847: INFO: (5) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 13.521196ms) Mar 8 17:21:52.850: INFO: (6) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 3.149308ms) Mar 8 17:21:52.851: INFO: (6) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 3.666722ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 4.393482ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 4.510084ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 5.175942ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 5.02677ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 4.718961ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 5.566221ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 5.364021ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.8594ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 5.256396ms) Mar 8 17:21:52.852: INFO: (6) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.940704ms) Mar 8 17:21:52.853: INFO: (6) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 5.200992ms) Mar 8 17:21:52.853: INFO: (6) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 5.208723ms) Mar 8 17:21:52.853: INFO: (6) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: ... (200; 4.215995ms) Mar 8 17:21:52.857: INFO: (7) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test<... 
(200; 4.25354ms) Mar 8 17:21:52.857: INFO: (7) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 4.251919ms) Mar 8 17:21:52.857: INFO: (7) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 4.308149ms) Mar 8 17:21:52.857: INFO: (7) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.299907ms) Mar 8 17:21:52.857: INFO: (7) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.350912ms) Mar 8 17:21:52.857: INFO: (7) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 4.412689ms) Mar 8 17:21:52.859: INFO: (7) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 5.910796ms) Mar 8 17:21:52.859: INFO: (7) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 6.033554ms) Mar 8 17:21:52.859: INFO: (7) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 6.102484ms) Mar 8 17:21:52.859: INFO: (7) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 6.140992ms) Mar 8 17:21:52.859: INFO: (7) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 6.214254ms) Mar 8 17:21:52.863: INFO: (8) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 3.493034ms) Mar 8 17:21:52.863: INFO: (8) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.495208ms) Mar 8 17:21:52.863: INFO: (8) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.511817ms) Mar 8 17:21:52.863: INFO: (8) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 3.667589ms) Mar 8 17:21:52.863: INFO: (8) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 3.732067ms) Mar 8 17:21:52.863: INFO: (8) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test (200; 4.559724ms) Mar 8 17:21:52.864: INFO: (8) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 4.521237ms) Mar 8 17:21:52.864: INFO: (8) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 4.53474ms) Mar 8 17:21:52.864: INFO: (8) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 4.570277ms) Mar 8 17:21:52.864: INFO: (8) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 4.578601ms) Mar 8 17:21:52.864: INFO: (8) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 4.621052ms) Mar 8 17:21:52.864: INFO: (8) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 4.683858ms) Mar 8 17:21:52.867: INFO: (9) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.237723ms) Mar 8 17:21:52.867: INFO: (9) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 3.299654ms) Mar 8 17:21:52.868: INFO: (9) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 3.450135ms) Mar 8 17:21:52.869: INFO: (9) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 4.908515ms) Mar 8 17:21:52.869: INFO: (9) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... 
(200; 4.735667ms) Mar 8 17:21:52.869: INFO: (9) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.703945ms) Mar 8 17:21:52.869: INFO: (9) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 4.873372ms) Mar 8 17:21:52.869: INFO: (9) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 5.205066ms) Mar 8 17:21:52.870: INFO: (9) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 5.441756ms) Mar 8 17:21:52.870: INFO: (9) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 5.548871ms) Mar 8 17:21:52.870: INFO: (9) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 5.6896ms) Mar 8 17:21:52.870: INFO: (9) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 5.669431ms) Mar 8 17:21:52.870: INFO: (9) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: ... (200; 4.206575ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 4.282986ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 4.337728ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.305175ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.636186ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 4.589397ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 4.603268ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 4.698288ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 4.858681ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 4.917284ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 4.9255ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 5.085326ms) Mar 8 17:21:52.875: INFO: (10) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 5.10748ms) Mar 8 17:21:52.879: INFO: (11) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.316717ms) Mar 8 17:21:52.879: INFO: (11) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 3.403441ms) Mar 8 17:21:52.879: INFO: (11) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... 
(200; 3.562814ms) Mar 8 17:21:52.879: INFO: (11) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 3.576538ms) Mar 8 17:21:52.880: INFO: (11) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.22781ms) Mar 8 17:21:52.880: INFO: (11) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.32918ms) Mar 8 17:21:52.880: INFO: (11) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test<... (200; 4.903539ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 4.934035ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 5.003509ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 5.055394ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 5.107755ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 5.112131ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 5.178422ms) Mar 8 17:21:52.881: INFO: (11) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 5.26408ms) Mar 8 17:21:52.883: INFO: (12) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 2.067788ms) Mar 8 17:21:52.884: INFO: (12) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 2.901267ms) Mar 8 17:21:52.884: INFO: (12) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 3.027754ms) Mar 8 17:21:52.885: INFO: (12) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 4.018446ms) Mar 8 17:21:52.885: INFO: (12) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 4.106932ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 4.739538ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.715389ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 4.779169ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.806075ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 4.783587ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 4.829829ms) Mar 8 17:21:52.886: INFO: (12) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: ... 
(200; 2.2917ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 3.489799ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 3.965373ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.891072ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.994203ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 4.089554ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 4.090777ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 4.098796ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 4.17291ms) Mar 8 17:21:52.890: INFO: (13) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: ... (200; 7.931518ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 8.241619ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 8.249187ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 8.199959ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 8.281521ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 8.318264ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 8.285537ms) Mar 8 17:21:52.900: INFO: (14) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 8.277193ms) Mar 8 17:21:52.904: INFO: (15) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test<... (200; 4.035294ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 4.096266ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.074974ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.07291ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... 
(200; 4.071675ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 4.158776ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.164349ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 4.494435ms) Mar 8 17:21:52.905: INFO: (15) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 4.574923ms) Mar 8 17:21:52.906: INFO: (15) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 5.099114ms) Mar 8 17:21:52.906: INFO: (15) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 5.195375ms) Mar 8 17:21:52.906: INFO: (15) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 5.268406ms) Mar 8 17:21:52.906: INFO: (15) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 5.580325ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 4.744959ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 4.729881ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.738865ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 4.757924ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 4.851119ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 4.82146ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 4.970175ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 5.18073ms) Mar 8 17:21:52.911: INFO: (16) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test (200; 5.202069ms) Mar 8 17:21:52.912: INFO: (16) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 5.77149ms) Mar 8 17:21:52.912: INFO: (16) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 5.7157ms) Mar 8 17:21:52.912: INFO: (16) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 5.689705ms) Mar 8 17:21:52.912: INFO: (16) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 5.726568ms) Mar 8 17:21:52.912: INFO: (16) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 5.757741ms) Mar 8 17:21:52.912: INFO: (16) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 5.751054ms) Mar 8 17:21:52.917: INFO: (17) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 5.185745ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 5.474251ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... 
(200; 5.394276ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 5.298878ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 5.422962ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 5.381996ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... (200; 5.607182ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 5.434195ms) Mar 8 17:21:52.918: INFO: (17) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: ... (200; 3.296182ms) Mar 8 17:21:52.924: INFO: (18) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 3.330897ms) Mar 8 17:21:52.924: INFO: (18) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 3.398279ms) Mar 8 17:21:52.924: INFO: (18) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 3.41503ms) Mar 8 17:21:52.925: INFO: (18) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 3.571969ms) Mar 8 17:21:52.925: INFO: (18) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: test<... (200; 3.663554ms) Mar 8 17:21:52.925: INFO: (18) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.708141ms) Mar 8 17:21:52.925: INFO: (18) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 3.743116ms) Mar 8 17:21:52.925: INFO: (18) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:162/proxy/: bar (200; 3.65402ms) Mar 8 17:21:52.926: INFO: (18) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname2/proxy/: bar (200; 4.901098ms) Mar 8 17:21:52.926: INFO: (18) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname1/proxy/: foo (200; 5.280808ms) Mar 8 17:21:52.926: INFO: (18) /api/v1/namespaces/proxy-1659/services/proxy-service-q2q46:portname2/proxy/: bar (200; 5.266098ms) Mar 8 17:21:52.926: INFO: (18) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 5.381617ms) Mar 8 17:21:52.926: INFO: (18) /api/v1/namespaces/proxy-1659/services/http:proxy-service-q2q46:portname1/proxy/: foo (200; 5.390456ms) Mar 8 17:21:52.927: INFO: (18) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname1/proxy/: tls baz (200; 5.498138ms) Mar 8 17:21:52.930: INFO: (19) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:462/proxy/: tls qux (200; 3.322921ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.150352ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:162/proxy/: bar (200; 4.157136ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:160/proxy/: foo (200; 4.184212ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/pods/http:proxy-service-q2q46-5tptm:1080/proxy/: ... 
(200; 4.222588ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:460/proxy/: tls baz (200; 4.237831ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm/proxy/: test (200; 4.281382ms) Mar 8 17:21:52.931: INFO: (19) /api/v1/namespaces/proxy-1659/services/https:proxy-service-q2q46:tlsportname2/proxy/: tls qux (200; 4.395072ms) Mar 8 17:21:52.933: INFO: (19) /api/v1/namespaces/proxy-1659/pods/proxy-service-q2q46-5tptm:1080/proxy/: test<... (200; 5.964771ms) Mar 8 17:21:52.933: INFO: (19) /api/v1/namespaces/proxy-1659/pods/https:proxy-service-q2q46-5tptm:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0308 17:21:56.412837 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 17:21:56.412: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:56.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9583" for this suite. 
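The garbage-collector test above creates a Deployment, deletes it without orphaning, and then polls until the owned ReplicaSet and Pods are collected (the "expected 0 rs, got 1 rs" and "expected 0 pods, got 2 pods" STEPs are intermediate polls, not failures). A minimal client-go sketch of the same cascading delete; the namespace and Deployment name are illustrative, and the context-taking call signatures assume a recent client-go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e framework uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "Not orphaning" means the dependents keep their ownerReferences and the
	// garbage collector deletes them after the owner. Background propagation
	// returns as soon as the Deployment itself is marked deleted.
	propagation := metav1.DeletePropagationBackground
	if err := clientset.AppsV1().Deployments("default").Delete(
		context.TODO(), "example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &propagation},
	); err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; ReplicaSet and Pods will be garbage collected")
}
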
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":74,"skipped":1343,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:56.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:21:56.537: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 15.288552ms) Mar 8 17:21:56.541: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.302654ms) Mar 8 17:21:56.544: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.78989ms) Mar 8 17:21:56.547: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.727091ms) Mar 8 17:21:56.550: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.652136ms) Mar 8 17:21:56.553: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.639798ms) Mar 8 17:21:56.555: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.567291ms) Mar 8 17:21:56.557: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.147625ms) Mar 8 17:21:56.560: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.332464ms) Mar 8 17:21:56.562: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.018815ms) Mar 8 17:21:56.564: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.148338ms) Mar 8 17:21:56.566: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.257274ms) Mar 8 17:21:56.569: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.394193ms) Mar 8 17:21:56.571: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.723296ms) Mar 8 17:21:56.574: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.108087ms) Mar 8 17:21:56.577: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.401237ms) Mar 8 17:21:56.579: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.525372ms) Mar 8 17:21:56.582: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.226532ms) Mar 8 17:21:56.584: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.189921ms) Mar 8 17:21:56.586: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.305176ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:56.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7170" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":75,"skipped":1352,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:56.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:21:56.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e" in namespace "projected-3290" to be "Succeeded or Failed" Mar 8 17:21:56.676: INFO: Pod "downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.795698ms Mar 8 17:21:58.680: INFO: Pod "downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02069532s STEP: Saw pod success Mar 8 17:21:58.680: INFO: Pod "downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e" satisfied condition "Succeeded or Failed" Mar 8 17:21:58.683: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e container client-container: STEP: delete the pod Mar 8 17:21:58.701: INFO: Waiting for pod downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e to disappear Mar 8 17:21:58.712: INFO: Pod downwardapi-volume-377b4ea0-c6ab-4749-ae50-c80b65c0292e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:21:58.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3290" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:21:58.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:21:58.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3" in namespace "downward-api-6472" to be "Succeeded or Failed" Mar 8 17:21:58.853: INFO: Pod "downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3": Phase="Pending", Reason="", readiness=false. Elapsed: 46.511064ms Mar 8 17:22:00.857: INFO: Pod "downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050306852s Mar 8 17:22:02.861: INFO: Pod "downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05437987s STEP: Saw pod success Mar 8 17:22:02.861: INFO: Pod "downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3" satisfied condition "Succeeded or Failed" Mar 8 17:22:02.864: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3 container client-container: STEP: delete the pod Mar 8 17:22:02.886: INFO: Waiting for pod downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3 to disappear Mar 8 17:22:02.899: INFO: Pod downwardapi-volume-6eab9866-26ab-40e2-9623-d04a000b52e3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:02.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6472" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1393,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:02.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:03.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2818" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":78,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:03.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 8 17:22:03.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5909' Mar 8 17:22:03.433: INFO: stderr: "" Mar 8 17:22:03.433: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 17:22:04.437: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:22:04.437: INFO: Found 0 / 1 Mar 8 17:22:05.531: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:22:05.531: INFO: Found 1 / 1 Mar 8 17:22:05.531: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 8 17:22:05.535: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:22:05.535: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 8 17:22:05.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config patch pod agnhost-master-tsdjb --namespace=kubectl-5909 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 8 17:22:05.661: INFO: stderr: "" Mar 8 17:22:05.661: INFO: stdout: "pod/agnhost-master-tsdjb patched\n" STEP: checking annotations Mar 8 17:22:05.711: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:22:05.712: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:05.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5909" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":79,"skipped":1451,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:05.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 8 17:22:08.347: INFO: Successfully updated pod "pod-update-fb60cc5e-ec48-4669-9751-d904d403fd0c" STEP: verifying the updated pod is in kubernetes Mar 8 17:22:08.360: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:08.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7657" for this suite. 
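The pod-update test above logs "Successfully updated pod" after mutating a live pod object and writing it back. Updates like this race against other writers on resourceVersion, so the idiomatic client-go pattern is to re-read and retry on conflict. A sketch under that assumption; the pod name, namespace, and label are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods := clientset.CoreV1().Pods("default")
	// Re-read and retry on conflict: another writer may bump
	// resourceVersion between our Get and Update.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get(context.TODO(), "pod-update-demo", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated"
		_, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod updated")
}

The kubectl patch step earlier in this block does the equivalent for annotations in one round trip, sending the strategic-merge payload {"metadata":{"annotations":{"x":"y"}}} shown in the log.
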
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:08.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8215 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 17:22:08.432: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 17:22:08.503: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 17:22:10.508: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:12.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:14.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:16.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:18.507: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:20.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:22.523: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:24.507: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:26.506: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:22:28.506: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 17:22:28.512: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 17:22:30.535: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.186:8080/dial?request=hostname&protocol=udp&host=10.244.1.185&port=8081&tries=1'] Namespace:pod-network-test-8215 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:22:30.535: INFO: >>> kubeConfig: /root/.kube/config I0308 17:22:30.571075 7 log.go:172] (0xc002cb71e0) (0xc0005655e0) Create stream I0308 17:22:30.571119 7 log.go:172] (0xc002cb71e0) (0xc0005655e0) Stream added, broadcasting: 1 I0308 17:22:30.574428 7 log.go:172] (0xc002cb71e0) Reply frame received for 1 I0308 17:22:30.574458 7 log.go:172] (0xc002cb71e0) (0xc002d29900) Create stream I0308 17:22:30.574471 7 log.go:172] (0xc002cb71e0) (0xc002d29900) Stream added, broadcasting: 3 I0308 17:22:30.576071 7 log.go:172] (0xc002cb71e0) Reply frame received for 3 I0308 17:22:30.576100 7 log.go:172] (0xc002cb71e0) (0xc002a3d720) Create stream I0308 17:22:30.576110 7 log.go:172] (0xc002cb71e0) (0xc002a3d720) Stream added, broadcasting: 5 I0308 17:22:30.577225 7 log.go:172] (0xc002cb71e0) Reply frame received for 5 I0308 17:22:30.633748 7 log.go:172] (0xc002cb71e0) Data frame received for 3 I0308 17:22:30.633827 7 log.go:172] 
(0xc002d29900) (3) Data frame handling I0308 17:22:30.633861 7 log.go:172] (0xc002d29900) (3) Data frame sent I0308 17:22:30.633925 7 log.go:172] (0xc002cb71e0) Data frame received for 5 I0308 17:22:30.633943 7 log.go:172] (0xc002a3d720) (5) Data frame handling I0308 17:22:30.634059 7 log.go:172] (0xc002cb71e0) Data frame received for 3 I0308 17:22:30.634073 7 log.go:172] (0xc002d29900) (3) Data frame handling I0308 17:22:30.635773 7 log.go:172] (0xc002cb71e0) Data frame received for 1 I0308 17:22:30.635794 7 log.go:172] (0xc0005655e0) (1) Data frame handling I0308 17:22:30.635812 7 log.go:172] (0xc0005655e0) (1) Data frame sent I0308 17:22:30.635825 7 log.go:172] (0xc002cb71e0) (0xc0005655e0) Stream removed, broadcasting: 1 I0308 17:22:30.635839 7 log.go:172] (0xc002cb71e0) Go away received I0308 17:22:30.635922 7 log.go:172] (0xc002cb71e0) (0xc0005655e0) Stream removed, broadcasting: 1 I0308 17:22:30.635947 7 log.go:172] (0xc002cb71e0) (0xc002d29900) Stream removed, broadcasting: 3 I0308 17:22:30.635955 7 log.go:172] (0xc002cb71e0) (0xc002a3d720) Stream removed, broadcasting: 5 Mar 8 17:22:30.635: INFO: Waiting for responses: map[] Mar 8 17:22:30.638: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.186:8080/dial?request=hostname&protocol=udp&host=10.244.2.179&port=8081&tries=1'] Namespace:pod-network-test-8215 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:22:30.638: INFO: >>> kubeConfig: /root/.kube/config I0308 17:22:30.667985 7 log.go:172] (0xc002cb78c0) (0xc000b96460) Create stream I0308 17:22:30.668012 7 log.go:172] (0xc002cb78c0) (0xc000b96460) Stream added, broadcasting: 1 I0308 17:22:30.670054 7 log.go:172] (0xc002cb78c0) Reply frame received for 1 I0308 17:22:30.670086 7 log.go:172] (0xc002cb78c0) (0xc002a3d860) Create stream I0308 17:22:30.670097 7 log.go:172] (0xc002cb78c0) (0xc002a3d860) Stream added, broadcasting: 3 I0308 17:22:30.671034 7 log.go:172] (0xc002cb78c0) Reply frame received for 3 I0308 17:22:30.671068 7 log.go:172] (0xc002cb78c0) (0xc002a3d900) Create stream I0308 17:22:30.671077 7 log.go:172] (0xc002cb78c0) (0xc002a3d900) Stream added, broadcasting: 5 I0308 17:22:30.671815 7 log.go:172] (0xc002cb78c0) Reply frame received for 5 I0308 17:22:30.727451 7 log.go:172] (0xc002cb78c0) Data frame received for 3 I0308 17:22:30.727477 7 log.go:172] (0xc002a3d860) (3) Data frame handling I0308 17:22:30.727490 7 log.go:172] (0xc002a3d860) (3) Data frame sent I0308 17:22:30.727826 7 log.go:172] (0xc002cb78c0) Data frame received for 3 I0308 17:22:30.727845 7 log.go:172] (0xc002a3d860) (3) Data frame handling I0308 17:22:30.728073 7 log.go:172] (0xc002cb78c0) Data frame received for 5 I0308 17:22:30.728087 7 log.go:172] (0xc002a3d900) (5) Data frame handling I0308 17:22:30.729762 7 log.go:172] (0xc002cb78c0) Data frame received for 1 I0308 17:22:30.729774 7 log.go:172] (0xc000b96460) (1) Data frame handling I0308 17:22:30.729780 7 log.go:172] (0xc000b96460) (1) Data frame sent I0308 17:22:30.729792 7 log.go:172] (0xc002cb78c0) (0xc000b96460) Stream removed, broadcasting: 1 I0308 17:22:30.729805 7 log.go:172] (0xc002cb78c0) Go away received I0308 17:22:30.729922 7 log.go:172] (0xc002cb78c0) (0xc000b96460) Stream removed, broadcasting: 1 I0308 17:22:30.729951 7 log.go:172] (0xc002cb78c0) (0xc002a3d860) Stream removed, broadcasting: 3 I0308 17:22:30.729963 7 log.go:172] (0xc002cb78c0) (0xc002a3d900) Stream removed, broadcasting: 5 Mar 8 17:22:30.730: INFO: Waiting for 
responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:30.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8215" for this suite. • [SLOW TEST:22.371 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1504,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:30.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-427132c3-7127-471b-9c62-55ec2d5c9522 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:34.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9885" for this suite. 
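The ConfigMap test above stores both text and binary payloads and waits for each to appear in the mounted volume. The binary side goes through the binaryData field, which carries arbitrary bytes (base64-encoded on the wire) alongside the UTF-8-only data field. A sketch with illustrative names and bytes, assuming a recent client-go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		// data holds UTF-8 strings; binaryData holds raw bytes and may
		// contain sequences that are not valid UTF-8.
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
	created, err := clientset.CoreV1().ConfigMaps("default").Create(
		context.TODO(), cm, metav1.CreateOptions{},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("created configmap:", created.Name)
}
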
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1505,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:34.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 17:22:36.970: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:22:37.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4905" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1512,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:22:37.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7335 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-7335 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7335 Mar 8 17:22:37.118: INFO: Found 0 stateful pods, waiting for 1 Mar 8 17:22:47.122: INFO: Waiting for pod ss-0 to enter Running - Ready=true, 
currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 8 17:22:47.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 17:22:47.362: INFO: stderr: "I0308 17:22:47.244136 647 log.go:172] (0xc000a66fd0) (0xc000950780) Create stream\nI0308 17:22:47.244177 647 log.go:172] (0xc000a66fd0) (0xc000950780) Stream added, broadcasting: 1\nI0308 17:22:47.246910 647 log.go:172] (0xc000a66fd0) Reply frame received for 1\nI0308 17:22:47.246995 647 log.go:172] (0xc000a66fd0) (0xc000978280) Create stream\nI0308 17:22:47.247045 647 log.go:172] (0xc000a66fd0) (0xc000978280) Stream added, broadcasting: 3\nI0308 17:22:47.247937 647 log.go:172] (0xc000a66fd0) Reply frame received for 3\nI0308 17:22:47.247978 647 log.go:172] (0xc000a66fd0) (0xc000978000) Create stream\nI0308 17:22:47.247987 647 log.go:172] (0xc000a66fd0) (0xc000978000) Stream added, broadcasting: 5\nI0308 17:22:47.248701 647 log.go:172] (0xc000a66fd0) Reply frame received for 5\nI0308 17:22:47.332333 647 log.go:172] (0xc000a66fd0) Data frame received for 5\nI0308 17:22:47.332354 647 log.go:172] (0xc000978000) (5) Data frame handling\nI0308 17:22:47.332366 647 log.go:172] (0xc000978000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 17:22:47.356553 647 log.go:172] (0xc000a66fd0) Data frame received for 3\nI0308 17:22:47.356579 647 log.go:172] (0xc000978280) (3) Data frame handling\nI0308 17:22:47.356607 647 log.go:172] (0xc000978280) (3) Data frame sent\nI0308 17:22:47.356629 647 log.go:172] (0xc000a66fd0) Data frame received for 3\nI0308 17:22:47.356651 647 log.go:172] (0xc000978280) (3) Data frame handling\nI0308 17:22:47.356841 647 log.go:172] (0xc000a66fd0) Data frame received for 5\nI0308 17:22:47.356867 647 log.go:172] (0xc000978000) (5) Data frame handling\nI0308 17:22:47.358237 647 log.go:172] (0xc000a66fd0) Data frame received for 1\nI0308 17:22:47.358264 647 log.go:172] (0xc000950780) (1) Data frame handling\nI0308 17:22:47.358283 647 log.go:172] (0xc000950780) (1) Data frame sent\nI0308 17:22:47.358317 647 log.go:172] (0xc000a66fd0) (0xc000950780) Stream removed, broadcasting: 1\nI0308 17:22:47.358338 647 log.go:172] (0xc000a66fd0) Go away received\nI0308 17:22:47.358603 647 log.go:172] (0xc000a66fd0) (0xc000950780) Stream removed, broadcasting: 1\nI0308 17:22:47.358621 647 log.go:172] (0xc000a66fd0) (0xc000978280) Stream removed, broadcasting: 3\nI0308 17:22:47.358632 647 log.go:172] (0xc000a66fd0) (0xc000978000) Stream removed, broadcasting: 5\n" Mar 8 17:22:47.362: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 17:22:47.362: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 17:22:47.368: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 17:22:57.373: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 17:22:57.373: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 17:22:57.397: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 17:22:57.397: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:48 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC }] Mar 8 17:22:57.397: INFO: ss-1 Pending [] Mar 8 17:22:57.397: INFO: Mar 8 17:22:57.397: INFO: StatefulSet ss has not reached scale 3, at 2 Mar 8 17:22:58.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985722881s Mar 8 17:22:59.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.954607165s Mar 8 17:23:00.438: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949686793s Mar 8 17:23:01.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.944886416s Mar 8 17:23:02.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.940443643s Mar 8 17:23:03.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.935958448s Mar 8 17:23:04.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.932084552s Mar 8 17:23:05.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.92738373s Mar 8 17:23:06.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 922.718078ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7335 Mar 8 17:23:07.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 17:23:07.697: INFO: stderr: "I0308 17:23:07.608530 667 log.go:172] (0xc0009bb340) (0xc000b14640) Create stream\nI0308 17:23:07.608585 667 log.go:172] (0xc0009bb340) (0xc000b14640) Stream added, broadcasting: 1\nI0308 17:23:07.612659 667 log.go:172] (0xc0009bb340) Reply frame received for 1\nI0308 17:23:07.612706 667 log.go:172] (0xc0009bb340) (0xc000b14000) Create stream\nI0308 17:23:07.612717 667 log.go:172] (0xc0009bb340) (0xc000b14000) Stream added, broadcasting: 3\nI0308 17:23:07.614404 667 log.go:172] (0xc0009bb340) Reply frame received for 3\nI0308 17:23:07.614460 667 log.go:172] (0xc0009bb340) (0xc0006a37c0) Create stream\nI0308 17:23:07.614480 667 log.go:172] (0xc0009bb340) (0xc0006a37c0) Stream added, broadcasting: 5\nI0308 17:23:07.615762 667 log.go:172] (0xc0009bb340) Reply frame received for 5\nI0308 17:23:07.692068 667 log.go:172] (0xc0009bb340) Data frame received for 3\nI0308 17:23:07.692098 667 log.go:172] (0xc000b14000) (3) Data frame handling\nI0308 17:23:07.692106 667 log.go:172] (0xc000b14000) (3) Data frame sent\nI0308 17:23:07.692114 667 log.go:172] (0xc0009bb340) Data frame received for 3\nI0308 17:23:07.692120 667 log.go:172] (0xc000b14000) (3) Data frame handling\nI0308 17:23:07.692143 667 log.go:172] (0xc0009bb340) Data frame received for 5\nI0308 17:23:07.692149 667 log.go:172] (0xc0006a37c0) (5) Data frame handling\nI0308 17:23:07.692155 667 log.go:172] (0xc0006a37c0) (5) Data frame sent\nI0308 17:23:07.692159 667 log.go:172] (0xc0009bb340) Data frame received for 5\nI0308 17:23:07.692164 667 log.go:172] (0xc0006a37c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 17:23:07.693386 667 log.go:172] (0xc0009bb340) Data frame received for 1\nI0308 17:23:07.693402 667 log.go:172] (0xc000b14640) (1) Data frame handling\nI0308 17:23:07.693413 667 log.go:172] (0xc000b14640) (1) Data frame 
sent\nI0308 17:23:07.693426 667 log.go:172] (0xc0009bb340) (0xc000b14640) Stream removed, broadcasting: 1\nI0308 17:23:07.693693 667 log.go:172] (0xc0009bb340) (0xc000b14640) Stream removed, broadcasting: 1\nI0308 17:23:07.693708 667 log.go:172] (0xc0009bb340) (0xc000b14000) Stream removed, broadcasting: 3\nI0308 17:23:07.693716 667 log.go:172] (0xc0009bb340) (0xc0006a37c0) Stream removed, broadcasting: 5\n" Mar 8 17:23:07.697: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 17:23:07.697: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 17:23:07.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 17:23:07.888: INFO: stderr: "I0308 17:23:07.812011 687 log.go:172] (0xc000a4f600) (0xc0009588c0) Create stream\nI0308 17:23:07.812060 687 log.go:172] (0xc000a4f600) (0xc0009588c0) Stream added, broadcasting: 1\nI0308 17:23:07.815120 687 log.go:172] (0xc000a4f600) Reply frame received for 1\nI0308 17:23:07.815154 687 log.go:172] (0xc000a4f600) (0xc000665680) Create stream\nI0308 17:23:07.815162 687 log.go:172] (0xc000a4f600) (0xc000665680) Stream added, broadcasting: 3\nI0308 17:23:07.815731 687 log.go:172] (0xc000a4f600) Reply frame received for 3\nI0308 17:23:07.815754 687 log.go:172] (0xc000a4f600) (0xc00051caa0) Create stream\nI0308 17:23:07.815762 687 log.go:172] (0xc000a4f600) (0xc00051caa0) Stream added, broadcasting: 5\nI0308 17:23:07.816274 687 log.go:172] (0xc000a4f600) Reply frame received for 5\nI0308 17:23:07.880855 687 log.go:172] (0xc000a4f600) Data frame received for 5\nI0308 17:23:07.880898 687 log.go:172] (0xc000a4f600) Data frame received for 3\nI0308 17:23:07.880931 687 log.go:172] (0xc000665680) (3) Data frame handling\nI0308 17:23:07.880948 687 log.go:172] (0xc00051caa0) (5) Data frame handling\nI0308 17:23:07.880981 687 log.go:172] (0xc00051caa0) (5) Data frame sent\nI0308 17:23:07.880994 687 log.go:172] (0xc000a4f600) Data frame received for 5\nI0308 17:23:07.881001 687 log.go:172] (0xc00051caa0) (5) Data frame handling\nI0308 17:23:07.881011 687 log.go:172] (0xc000665680) (3) Data frame sent\nI0308 17:23:07.881018 687 log.go:172] (0xc000a4f600) Data frame received for 3\nI0308 17:23:07.881025 687 log.go:172] (0xc000665680) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 17:23:07.882372 687 log.go:172] (0xc000a4f600) Data frame received for 1\nI0308 17:23:07.882387 687 log.go:172] (0xc0009588c0) (1) Data frame handling\nI0308 17:23:07.882406 687 log.go:172] (0xc0009588c0) (1) Data frame sent\nI0308 17:23:07.882418 687 log.go:172] (0xc000a4f600) (0xc0009588c0) Stream removed, broadcasting: 1\nI0308 17:23:07.882445 687 log.go:172] (0xc000a4f600) Go away received\nI0308 17:23:07.882663 687 log.go:172] (0xc000a4f600) (0xc0009588c0) Stream removed, broadcasting: 1\nI0308 17:23:07.882674 687 log.go:172] (0xc000a4f600) (0xc000665680) Stream removed, broadcasting: 3\nI0308 17:23:07.882680 687 log.go:172] (0xc000a4f600) (0xc00051caa0) Stream removed, broadcasting: 5\n" Mar 8 17:23:07.888: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 17:23:07.888: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 17:23:07.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 17:23:08.048: INFO: stderr: "I0308 17:23:07.985559 707 log.go:172] (0xc000a32c60) (0xc00091a3c0) Create stream\nI0308 17:23:07.985598 707 log.go:172] (0xc000a32c60) (0xc00091a3c0) Stream added, broadcasting: 1\nI0308 17:23:07.987342 707 log.go:172] (0xc000a32c60) Reply frame received for 1\nI0308 17:23:07.987379 707 log.go:172] (0xc000a32c60) (0xc000a280a0) Create stream\nI0308 17:23:07.987389 707 log.go:172] (0xc000a32c60) (0xc000a280a0) Stream added, broadcasting: 3\nI0308 17:23:07.988192 707 log.go:172] (0xc000a32c60) Reply frame received for 3\nI0308 17:23:07.988223 707 log.go:172] (0xc000a32c60) (0xc0009a03c0) Create stream\nI0308 17:23:07.988239 707 log.go:172] (0xc000a32c60) (0xc0009a03c0) Stream added, broadcasting: 5\nI0308 17:23:07.988838 707 log.go:172] (0xc000a32c60) Reply frame received for 5\nI0308 17:23:08.044018 707 log.go:172] (0xc000a32c60) Data frame received for 3\nI0308 17:23:08.044042 707 log.go:172] (0xc000a280a0) (3) Data frame handling\nI0308 17:23:08.044057 707 log.go:172] (0xc000a280a0) (3) Data frame sent\nI0308 17:23:08.044412 707 log.go:172] (0xc000a32c60) Data frame received for 3\nI0308 17:23:08.044438 707 log.go:172] (0xc000a280a0) (3) Data frame handling\nI0308 17:23:08.044454 707 log.go:172] (0xc000a32c60) Data frame received for 5\nI0308 17:23:08.044468 707 log.go:172] (0xc0009a03c0) (5) Data frame handling\nI0308 17:23:08.044477 707 log.go:172] (0xc0009a03c0) (5) Data frame sent\nI0308 17:23:08.044488 707 log.go:172] (0xc000a32c60) Data frame received for 5\nI0308 17:23:08.044496 707 log.go:172] (0xc0009a03c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 17:23:08.045466 707 log.go:172] (0xc000a32c60) Data frame received for 1\nI0308 17:23:08.045484 707 log.go:172] (0xc00091a3c0) (1) Data frame handling\nI0308 17:23:08.045499 707 log.go:172] (0xc00091a3c0) (1) Data frame sent\nI0308 17:23:08.045514 707 log.go:172] (0xc000a32c60) (0xc00091a3c0) Stream removed, broadcasting: 1\nI0308 17:23:08.045700 707 log.go:172] (0xc000a32c60) Go away received\nI0308 17:23:08.045790 707 log.go:172] (0xc000a32c60) (0xc00091a3c0) Stream removed, broadcasting: 1\nI0308 17:23:08.045808 707 log.go:172] (0xc000a32c60) (0xc000a280a0) Stream removed, broadcasting: 3\nI0308 17:23:08.045816 707 log.go:172] (0xc000a32c60) (0xc0009a03c0) Stream removed, broadcasting: 5\n" Mar 8 17:23:08.049: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 17:23:08.049: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 17:23:08.052: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 8 17:23:18.057: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:23:18.057: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:23:18.057: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 8 17:23:18.060: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 17:23:18.245: INFO: stderr: "I0308 17:23:18.174246 727 log.go:172] (0xc00003a0b0) (0xc00097c000) Create stream\nI0308 17:23:18.174299 727 log.go:172] (0xc00003a0b0) (0xc00097c000) Stream added, broadcasting: 1\nI0308 17:23:18.176127 727 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0308 17:23:18.176150 727 log.go:172] (0xc00003a0b0) (0xc00097c140) Create stream\nI0308 17:23:18.176158 727 log.go:172] (0xc00003a0b0) (0xc00097c140) Stream added, broadcasting: 3\nI0308 17:23:18.176895 727 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0308 17:23:18.176933 727 log.go:172] (0xc00003a0b0) (0xc0006fb220) Create stream\nI0308 17:23:18.176945 727 log.go:172] (0xc00003a0b0) (0xc0006fb220) Stream added, broadcasting: 5\nI0308 17:23:18.177944 727 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0308 17:23:18.237472 727 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 17:23:18.237506 727 log.go:172] (0xc0006fb220) (5) Data frame handling\nI0308 17:23:18.237528 727 log.go:172] (0xc0006fb220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 17:23:18.237706 727 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0308 17:23:18.237720 727 log.go:172] (0xc00097c140) (3) Data frame handling\nI0308 17:23:18.237728 727 log.go:172] (0xc00097c140) (3) Data frame sent\nI0308 17:23:18.237734 727 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0308 17:23:18.237739 727 log.go:172] (0xc00097c140) (3) Data frame handling\nI0308 17:23:18.238108 727 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 17:23:18.238156 727 log.go:172] (0xc0006fb220) (5) Data frame handling\nI0308 17:23:18.239721 727 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0308 17:23:18.239741 727 log.go:172] (0xc00097c000) (1) Data frame handling\nI0308 17:23:18.239779 727 log.go:172] (0xc00097c000) (1) Data frame sent\nI0308 17:23:18.239799 727 log.go:172] (0xc00003a0b0) (0xc00097c000) Stream removed, broadcasting: 1\nI0308 17:23:18.239919 727 log.go:172] (0xc00003a0b0) Go away received\nI0308 17:23:18.240194 727 log.go:172] (0xc00003a0b0) (0xc00097c000) Stream removed, broadcasting: 1\nI0308 17:23:18.240217 727 log.go:172] (0xc00003a0b0) (0xc00097c140) Stream removed, broadcasting: 3\nI0308 17:23:18.240230 727 log.go:172] (0xc00003a0b0) (0xc0006fb220) Stream removed, broadcasting: 5\n" Mar 8 17:23:18.245: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 17:23:18.245: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 17:23:18.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 17:23:18.482: INFO: stderr: "I0308 17:23:18.360045 748 log.go:172] (0xc000b18160) (0xc000af21e0) Create stream\nI0308 17:23:18.360093 748 log.go:172] (0xc000b18160) (0xc000af21e0) Stream added, broadcasting: 1\nI0308 17:23:18.364248 748 log.go:172] (0xc000b18160) Reply frame received for 1\nI0308 17:23:18.364287 748 log.go:172] (0xc000b18160) (0xc000afe140) Create stream\nI0308 17:23:18.364301 748 log.go:172] (0xc000b18160) (0xc000afe140) Stream added, broadcasting: 3\nI0308 17:23:18.365282 748 
log.go:172] (0xc000b18160) Reply frame received for 3\nI0308 17:23:18.365306 748 log.go:172] (0xc000b18160) (0xc000afe1e0) Create stream\nI0308 17:23:18.365317 748 log.go:172] (0xc000b18160) (0xc000afe1e0) Stream added, broadcasting: 5\nI0308 17:23:18.366079 748 log.go:172] (0xc000b18160) Reply frame received for 5\nI0308 17:23:18.432269 748 log.go:172] (0xc000b18160) Data frame received for 5\nI0308 17:23:18.432292 748 log.go:172] (0xc000afe1e0) (5) Data frame handling\nI0308 17:23:18.432306 748 log.go:172] (0xc000afe1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 17:23:18.476696 748 log.go:172] (0xc000b18160) Data frame received for 3\nI0308 17:23:18.476720 748 log.go:172] (0xc000afe140) (3) Data frame handling\nI0308 17:23:18.476734 748 log.go:172] (0xc000afe140) (3) Data frame sent\nI0308 17:23:18.476810 748 log.go:172] (0xc000b18160) Data frame received for 5\nI0308 17:23:18.476826 748 log.go:172] (0xc000afe1e0) (5) Data frame handling\nI0308 17:23:18.476849 748 log.go:172] (0xc000b18160) Data frame received for 3\nI0308 17:23:18.476869 748 log.go:172] (0xc000afe140) (3) Data frame handling\nI0308 17:23:18.478703 748 log.go:172] (0xc000b18160) Data frame received for 1\nI0308 17:23:18.478722 748 log.go:172] (0xc000af21e0) (1) Data frame handling\nI0308 17:23:18.478740 748 log.go:172] (0xc000af21e0) (1) Data frame sent\nI0308 17:23:18.478843 748 log.go:172] (0xc000b18160) (0xc000af21e0) Stream removed, broadcasting: 1\nI0308 17:23:18.478915 748 log.go:172] (0xc000b18160) Go away received\nI0308 17:23:18.479114 748 log.go:172] (0xc000b18160) (0xc000af21e0) Stream removed, broadcasting: 1\nI0308 17:23:18.479126 748 log.go:172] (0xc000b18160) (0xc000afe140) Stream removed, broadcasting: 3\nI0308 17:23:18.479131 748 log.go:172] (0xc000b18160) (0xc000afe1e0) Stream removed, broadcasting: 5\n" Mar 8 17:23:18.482: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 17:23:18.482: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 17:23:18.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7335 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 17:23:18.674: INFO: stderr: "I0308 17:23:18.584742 768 log.go:172] (0xc0005818c0) (0xc0004ceaa0) Create stream\nI0308 17:23:18.584784 768 log.go:172] (0xc0005818c0) (0xc0004ceaa0) Stream added, broadcasting: 1\nI0308 17:23:18.586445 768 log.go:172] (0xc0005818c0) Reply frame received for 1\nI0308 17:23:18.586476 768 log.go:172] (0xc0005818c0) (0xc000904000) Create stream\nI0308 17:23:18.586485 768 log.go:172] (0xc0005818c0) (0xc000904000) Stream added, broadcasting: 3\nI0308 17:23:18.587130 768 log.go:172] (0xc0005818c0) Reply frame received for 3\nI0308 17:23:18.587149 768 log.go:172] (0xc0005818c0) (0xc0004ceb40) Create stream\nI0308 17:23:18.587156 768 log.go:172] (0xc0005818c0) (0xc0004ceb40) Stream added, broadcasting: 5\nI0308 17:23:18.587749 768 log.go:172] (0xc0005818c0) Reply frame received for 5\nI0308 17:23:18.648623 768 log.go:172] (0xc0005818c0) Data frame received for 5\nI0308 17:23:18.648646 768 log.go:172] (0xc0004ceb40) (5) Data frame handling\nI0308 17:23:18.648667 768 log.go:172] (0xc0004ceb40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 17:23:18.668374 768 log.go:172] (0xc0005818c0) Data frame received for 3\nI0308 
17:23:18.668398 768 log.go:172] (0xc000904000) (3) Data frame handling\nI0308 17:23:18.668412 768 log.go:172] (0xc000904000) (3) Data frame sent\nI0308 17:23:18.668712 768 log.go:172] (0xc0005818c0) Data frame received for 5\nI0308 17:23:18.668738 768 log.go:172] (0xc0004ceb40) (5) Data frame handling\nI0308 17:23:18.668786 768 log.go:172] (0xc0005818c0) Data frame received for 3\nI0308 17:23:18.668806 768 log.go:172] (0xc000904000) (3) Data frame handling\nI0308 17:23:18.670170 768 log.go:172] (0xc0005818c0) Data frame received for 1\nI0308 17:23:18.670194 768 log.go:172] (0xc0004ceaa0) (1) Data frame handling\nI0308 17:23:18.670211 768 log.go:172] (0xc0004ceaa0) (1) Data frame sent\nI0308 17:23:18.670233 768 log.go:172] (0xc0005818c0) (0xc0004ceaa0) Stream removed, broadcasting: 1\nI0308 17:23:18.670255 768 log.go:172] (0xc0005818c0) Go away received\nI0308 17:23:18.670562 768 log.go:172] (0xc0005818c0) (0xc0004ceaa0) Stream removed, broadcasting: 1\nI0308 17:23:18.670583 768 log.go:172] (0xc0005818c0) (0xc000904000) Stream removed, broadcasting: 3\nI0308 17:23:18.670595 768 log.go:172] (0xc0005818c0) (0xc0004ceb40) Stream removed, broadcasting: 5\n" Mar 8 17:23:18.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 17:23:18.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 17:23:18.674: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 17:23:18.677: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 8 17:23:28.686: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 17:23:28.686: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 17:23:28.686: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 17:23:28.697: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 17:23:28.697: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC }] Mar 8 17:23:28.697: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC }] Mar 8 17:23:28.697: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC }] Mar 8 17:23:28.697: INFO: Mar 8 17:23:28.697: INFO: 
StatefulSet ss has not reached scale 0, at 3 Mar 8 17:23:29.721: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 17:23:29.721: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC }] Mar 8 17:23:29.721: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC }] Mar 8 17:23:29.721: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC }] Mar 8 17:23:29.721: INFO: Mar 8 17:23:29.721: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 17:23:30.742: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 17:23:30.742: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:37 +0000 UTC }] Mar 8 17:23:30.742: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:23:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 17:22:57 +0000 UTC }] Mar 8 17:23:30.742: INFO: Mar 8 17:23:30.742: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 8 17:23:31.745: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.950329047s Mar 8 17:23:32.748: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.94761162s Mar 8 17:23:33.751: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.944123355s Mar 8 17:23:34.753: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.941552452s Mar 8 17:23:35.758: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.93894002s Mar 8 17:23:36.761: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.934607075s Mar 8 17:23:37.766: INFO: Verifying statefulset ss doesn't scale past 0 for another 930.807176ms 
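Note: the Ready=false states above are induced, not incidental. httpd's readiness probe fetches index.html, so the earlier exec'd mv out of the docroot fails the probe and marks the pod unhealthy, while moving the file back restores readiness (the probe path is inferred from those commands, not shown in this log). A by-hand sketch of the same toggle, reusing the namespace and a pod name from this run:
# mark ss-0 unhealthy: hide the file its readiness probe serves
kubectl exec -n statefulset-7335 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# restore readiness
kubectl exec -n statefulset-7335 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Scale-down then proceeds to 0 even though all three pods report Ready=false, which is the burst-scaling behavior under test.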
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7335 Mar 8 17:23:38.776: INFO: Scaling statefulset ss to 0 Mar 8 17:23:38.785: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 17:23:38.788: INFO: Deleting all statefulset in ns statefulset-7335 Mar 8 17:23:38.792: INFO: Scaling statefulset ss to 0 Mar 8 17:23:38.801: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 17:23:38.804: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:23:38.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7335" for this suite. • [SLOW TEST:61.818 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":84,"skipped":1518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:23:38.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:23:39.003: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6eb41dfc-09b0-4609-8105-9d793b3b3fdb", Controller:(*bool)(0xc002e3f6a6), BlockOwnerDeletion:(*bool)(0xc002e3f6a7)}} Mar 8 17:23:39.011: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"16b7b1bc-a40c-416b-a7fb-13f6dd88e687", Controller:(*bool)(0xc0029d9bd6), BlockOwnerDeletion:(*bool)(0xc0029d9bd7)}} Mar 8 17:23:39.055: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b1e8e12b-05cd-4023-a2c6-1f1a183ed195", Controller:(*bool)(0xc002e3f88a), BlockOwnerDeletion:(*bool)(0xc002e3f88b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:23:44.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3278" 
for this suite. • [SLOW TEST:5.287 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":85,"skipped":1570,"failed":0} [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:23:44.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 8 17:23:48.711: INFO: Successfully updated pod "adopt-release-7k6qx" STEP: Checking that the Job readopts the Pod Mar 8 17:23:48.711: INFO: Waiting up to 15m0s for pod "adopt-release-7k6qx" in namespace "job-1604" to be "adopted" Mar 8 17:23:48.729: INFO: Pod "adopt-release-7k6qx": Phase="Running", Reason="", readiness=true. Elapsed: 17.790108ms Mar 8 17:23:50.733: INFO: Pod "adopt-release-7k6qx": Phase="Running", Reason="", readiness=true. Elapsed: 2.02209363s Mar 8 17:23:50.733: INFO: Pod "adopt-release-7k6qx" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 8 17:23:51.243: INFO: Successfully updated pod "adopt-release-7k6qx" STEP: Checking that the Job releases the Pod Mar 8 17:23:51.243: INFO: Waiting up to 15m0s for pod "adopt-release-7k6qx" in namespace "job-1604" to be "released" Mar 8 17:23:51.513: INFO: Pod "adopt-release-7k6qx": Phase="Running", Reason="", readiness=true. Elapsed: 270.349628ms Mar 8 17:23:53.517: INFO: Pod "adopt-release-7k6qx": Phase="Running", Reason="", readiness=true. Elapsed: 2.274359044s Mar 8 17:23:53.517: INFO: Pod "adopt-release-7k6qx" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:23:53.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1604" for this suite. 
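Note: in the adopt/release test above, ownership follows the Job's label selector: a running pod that matches the selector and has no controller owner is re-adopted, and stripping the matching labels makes the controller release it. A by-hand sketch of the release step, reusing the pod name from this run; the label keys shown are the ones Jobs of this era stamp on their pods and are an assumption here:
# trailing '-' removes a label; once the selector no longer matches, the Job releases the pod
kubectl label pod adopt-release-7k6qx -n job-1604 job-name- controller-uid-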
• [SLOW TEST:9.410 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":86,"skipped":1570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:23:53.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:23:55.620: INFO: Waiting up to 5m0s for pod "client-envvars-3249e221-fa65-439a-9047-db461ade3cae" in namespace "pods-1316" to be "Succeeded or Failed" Mar 8 17:23:55.625: INFO: Pod "client-envvars-3249e221-fa65-439a-9047-db461ade3cae": Phase="Pending", Reason="", readiness=false. Elapsed: 5.632915ms Mar 8 17:23:57.629: INFO: Pod "client-envvars-3249e221-fa65-439a-9047-db461ade3cae": Phase="Running", Reason="", readiness=true. Elapsed: 2.009217549s Mar 8 17:23:59.633: INFO: Pod "client-envvars-3249e221-fa65-439a-9047-db461ade3cae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013115916s STEP: Saw pod success Mar 8 17:23:59.633: INFO: Pod "client-envvars-3249e221-fa65-439a-9047-db461ade3cae" satisfied condition "Succeeded or Failed" Mar 8 17:23:59.636: INFO: Trying to get logs from node latest-worker2 pod client-envvars-3249e221-fa65-439a-9047-db461ade3cae container env3cont: STEP: delete the pod Mar 8 17:23:59.674: INFO: Waiting for pod client-envvars-3249e221-fa65-439a-9047-db461ade3cae to disappear Mar 8 17:23:59.680: INFO: Pod client-envvars-3249e221-fa65-439a-9047-db461ade3cae no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:23:59.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1316" for this suite. 
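Note: the env-var check above relies on the kubelet injecting Docker-links-style variables for every Service that already exists when a container starts, named {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT (service name uppercased, dashes turned into underscores), which is why the test creates the service before the pod. A sketch of inspecting them, with a hypothetical service named backend and a hypothetical pod named client-pod:
kubectl exec -n pods-1316 client-pod -- env | grep '^BACKEND_'
# BACKEND_SERVICE_HOST=10.96.x.x   (ClusterIP; illustrative value)
# BACKEND_SERVICE_PORT=6379        (illustrative value)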
• [SLOW TEST:6.162 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1595,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:23:59.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:24:01.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-60" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1602,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:24:01.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:24:01.838: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710" in namespace "downward-api-1497" to be "Succeeded or Failed" Mar 8 17:24:01.863: INFO: Pod "downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710": Phase="Pending", Reason="", readiness=false. Elapsed: 24.505197ms Mar 8 17:24:03.866: INFO: Pod "downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.027876016s STEP: Saw pod success Mar 8 17:24:03.866: INFO: Pod "downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710" satisfied condition "Succeeded or Failed" Mar 8 17:24:03.870: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710 container client-container: STEP: delete the pod Mar 8 17:24:03.908: INFO: Waiting for pod downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710 to disappear Mar 8 17:24:03.911: INFO: Pod downwardapi-volume-d7711b85-877c-4d3f-b3b9-5bf6bd710710 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:24:03.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1497" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1612,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:24:03.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 8 17:24:07.048: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:24:08.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5866" for this suite. 
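Note: like the Job case earlier, ReplicaSet ownership is purely selector-driven: the pre-existing pod with the matching 'name' label is adopted, and rewriting that label releases it, after which the ReplicaSet creates a replacement to hold its replica count at 1. A by-hand sketch of the release step (pod name from this run; the new label value is arbitrary):
kubectl label pod pod-adoption-release -n replicaset-5866 name=released --overwrite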
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":90,"skipped":1623,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:24:08.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 8 17:24:08.161: INFO: Waiting up to 5m0s for pod "downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791" in namespace "downward-api-323" to be "Succeeded or Failed" Mar 8 17:24:08.176: INFO: Pod "downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791": Phase="Pending", Reason="", readiness=false. Elapsed: 14.373255ms Mar 8 17:24:10.178: INFO: Pod "downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017021976s STEP: Saw pod success Mar 8 17:24:10.178: INFO: Pod "downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791" satisfied condition "Succeeded or Failed" Mar 8 17:24:10.181: INFO: Trying to get logs from node latest-worker2 pod downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791 container dapi-container: STEP: delete the pod Mar 8 17:24:10.196: INFO: Waiting for pod downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791 to disappear Mar 8 17:24:10.215: INFO: Pod downward-api-cd1e1d7f-024f-46d6-95ff-e6bdf4219791 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:24:10.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-323" for this suite. 
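Note: the host IP reaches the container through the downward API, an env var whose valueFrom points at the pod's status.hostIP field. A minimal pod-spec fragment for the same wiring (the variable name is arbitrary; the field path is part of the core v1 API):
  env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP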
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:24:10.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 8 17:24:10.395: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:24:14.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1435" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":92,"skipped":1704,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:24:14.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 8 17:24:15.028: INFO: Waiting up to 5m0s for pod "client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081" in namespace "containers-7284" to be "Succeeded or Failed" Mar 8 17:24:15.032: INFO: Pod "client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081": Phase="Pending", Reason="", readiness=false. Elapsed: 3.868434ms Mar 8 17:24:17.035: INFO: Pod "client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007431916s STEP: Saw pod success Mar 8 17:24:17.035: INFO: Pod "client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081" satisfied condition "Succeeded or Failed" Mar 8 17:24:17.038: INFO: Trying to get logs from node latest-worker pod client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081 container test-container: STEP: delete the pod Mar 8 17:24:17.057: INFO: Waiting for pod client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081 to disappear Mar 8 17:24:17.062: INFO: Pod client-containers-c5ed5b3d-3488-47f2-a9aa-3b1de0d0d081 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:24:17.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7284" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1705,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:24:17.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6528 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 8 17:24:17.170: INFO: Found 0 stateful pods, waiting for 3 Mar 8 17:24:27.175: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:24:27.175: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:24:27.175: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 17:24:27.203: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 8 17:24:37.233: INFO: Updating stateful set ss2 Mar 8 17:24:37.298: INFO: Waiting for Pod statefulset-6528/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 8 17:24:47.389: INFO: Found 2 stateful pods, waiting for 3 Mar 8 17:24:57.393: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:24:57.393: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently 
Running - Ready=true Mar 8 17:24:57.393: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 8 17:24:57.415: INFO: Updating stateful set ss2 Mar 8 17:24:57.453: INFO: Waiting for Pod statefulset-6528/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 17:25:07.474: INFO: Updating stateful set ss2 Mar 8 17:25:07.489: INFO: Waiting for StatefulSet statefulset-6528/ss2 to complete update Mar 8 17:25:07.489: INFO: Waiting for Pod statefulset-6528/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 17:25:17.496: INFO: Deleting all statefulset in ns statefulset-6528 Mar 8 17:25:17.498: INFO: Scaling statefulset ss2 to 0 Mar 8 17:25:47.519: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 17:25:47.523: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:25:47.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6528" for this suite. • [SLOW TEST:90.481 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":94,"skipped":1713,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:25:47.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-7210 STEP: creating replication controller nodeport-test in namespace services-7210 I0308 17:25:47.646778 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7210, replica count: 2 I0308 17:25:50.697255 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 17:25:50.697: INFO: Creating new exec pod Mar 8 17:25:53.743: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7210 execpodrcpvq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 8 17:25:53.976: INFO: stderr: "I0308 17:25:53.911233 788 log.go:172] (0xc0003c5810) (0xc0007a95e0) Create stream\nI0308 17:25:53.911286 788 log.go:172] (0xc0003c5810) (0xc0007a95e0) Stream added, broadcasting: 1\nI0308 17:25:53.913502 788 log.go:172] (0xc0003c5810) Reply frame received for 1\nI0308 17:25:53.913546 788 log.go:172] (0xc0003c5810) (0xc000942000) Create stream\nI0308 17:25:53.913559 788 log.go:172] (0xc0003c5810) (0xc000942000) Stream added, broadcasting: 3\nI0308 17:25:53.914613 788 log.go:172] (0xc0003c5810) Reply frame received for 3\nI0308 17:25:53.914645 788 log.go:172] (0xc0003c5810) (0xc0009420a0) Create stream\nI0308 17:25:53.914654 788 log.go:172] (0xc0003c5810) (0xc0009420a0) Stream added, broadcasting: 5\nI0308 17:25:53.915627 788 log.go:172] (0xc0003c5810) Reply frame received for 5\nI0308 17:25:53.968405 788 log.go:172] (0xc0003c5810) Data frame received for 5\nI0308 17:25:53.968429 788 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0308 17:25:53.968443 788 log.go:172] (0xc0009420a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0308 17:25:53.969186 788 log.go:172] (0xc0003c5810) Data frame received for 5\nI0308 17:25:53.969210 788 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0308 17:25:53.969227 788 log.go:172] (0xc0009420a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0308 17:25:53.969563 788 log.go:172] (0xc0003c5810) Data frame received for 5\nI0308 17:25:53.969589 788 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0308 17:25:53.969836 788 log.go:172] (0xc0003c5810) Data frame received for 3\nI0308 17:25:53.969855 788 log.go:172] (0xc000942000) (3) Data frame handling\nI0308 17:25:53.971870 788 log.go:172] (0xc0003c5810) Data frame received for 1\nI0308 17:25:53.971893 788 log.go:172] (0xc0007a95e0) (1) Data frame handling\nI0308 17:25:53.971909 788 log.go:172] (0xc0007a95e0) (1) Data frame sent\nI0308 17:25:53.971924 788 log.go:172] (0xc0003c5810) (0xc0007a95e0) Stream removed, broadcasting: 1\nI0308 17:25:53.971939 788 log.go:172] (0xc0003c5810) Go away received\nI0308 17:25:53.972282 788 log.go:172] (0xc0003c5810) (0xc0007a95e0) Stream removed, broadcasting: 1\nI0308 17:25:53.972298 788 log.go:172] (0xc0003c5810) (0xc000942000) Stream removed, broadcasting: 3\nI0308 17:25:53.972305 788 log.go:172] (0xc0003c5810) (0xc0009420a0) Stream removed, broadcasting: 5\n" Mar 8 17:25:53.976: INFO: stdout: "" Mar 8 17:25:53.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7210 execpodrcpvq -- /bin/sh -x -c nc -zv -t -w 2 10.96.248.146 80' Mar 8 17:25:54.153: INFO: stderr: "I0308 17:25:54.092257 810 log.go:172] (0xc0009cb1e0) (0xc000984780) Create stream\nI0308 17:25:54.092300 810 log.go:172] (0xc0009cb1e0) (0xc000984780) Stream added, broadcasting: 1\nI0308 17:25:54.095893 810 log.go:172] (0xc0009cb1e0) Reply frame received for 1\nI0308 17:25:54.095925 810 log.go:172] (0xc0009cb1e0) (0xc000621540) Create stream\nI0308 17:25:54.095935 810 log.go:172] (0xc0009cb1e0) (0xc000621540) Stream added, broadcasting: 3\nI0308 17:25:54.096661 810 log.go:172] (0xc0009cb1e0) Reply frame received for 3\nI0308 17:25:54.096687 810 log.go:172] (0xc0009cb1e0) (0xc0002b0960) Create stream\nI0308 17:25:54.096697 810 log.go:172] (0xc0009cb1e0) 
(0xc0002b0960) Stream added, broadcasting: 5\nI0308 17:25:54.097343 810 log.go:172] (0xc0009cb1e0) Reply frame received for 5\nI0308 17:25:54.147823 810 log.go:172] (0xc0009cb1e0) Data frame received for 5\nI0308 17:25:54.147849 810 log.go:172] (0xc0002b0960) (5) Data frame handling\nI0308 17:25:54.147868 810 log.go:172] (0xc0002b0960) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.248.146 80\nConnection to 10.96.248.146 80 port [tcp/http] succeeded!\nI0308 17:25:54.147949 810 log.go:172] (0xc0009cb1e0) Data frame received for 5\nI0308 17:25:54.147973 810 log.go:172] (0xc0009cb1e0) Data frame received for 3\nI0308 17:25:54.147984 810 log.go:172] (0xc000621540) (3) Data frame handling\nI0308 17:25:54.148025 810 log.go:172] (0xc0002b0960) (5) Data frame handling\nI0308 17:25:54.149156 810 log.go:172] (0xc0009cb1e0) Data frame received for 1\nI0308 17:25:54.149205 810 log.go:172] (0xc000984780) (1) Data frame handling\nI0308 17:25:54.149218 810 log.go:172] (0xc000984780) (1) Data frame sent\nI0308 17:25:54.149230 810 log.go:172] (0xc0009cb1e0) (0xc000984780) Stream removed, broadcasting: 1\nI0308 17:25:54.149244 810 log.go:172] (0xc0009cb1e0) Go away received\nI0308 17:25:54.149608 810 log.go:172] (0xc0009cb1e0) (0xc000984780) Stream removed, broadcasting: 1\nI0308 17:25:54.149624 810 log.go:172] (0xc0009cb1e0) (0xc000621540) Stream removed, broadcasting: 3\nI0308 17:25:54.149634 810 log.go:172] (0xc0009cb1e0) (0xc0002b0960) Stream removed, broadcasting: 5\n" Mar 8 17:25:54.153: INFO: stdout: "" Mar 8 17:25:54.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7210 execpodrcpvq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 30628' Mar 8 17:25:54.321: INFO: stderr: "I0308 17:25:54.254064 830 log.go:172] (0xc0009b8210) (0xc0005312c0) Create stream\nI0308 17:25:54.254099 830 log.go:172] (0xc0009b8210) (0xc0005312c0) Stream added, broadcasting: 1\nI0308 17:25:54.255767 830 log.go:172] (0xc0009b8210) Reply frame received for 1\nI0308 17:25:54.255796 830 log.go:172] (0xc0009b8210) (0xc0007da000) Create stream\nI0308 17:25:54.255805 830 log.go:172] (0xc0009b8210) (0xc0007da000) Stream added, broadcasting: 3\nI0308 17:25:54.256337 830 log.go:172] (0xc0009b8210) Reply frame received for 3\nI0308 17:25:54.256365 830 log.go:172] (0xc0009b8210) (0xc000814000) Create stream\nI0308 17:25:54.256374 830 log.go:172] (0xc0009b8210) (0xc000814000) Stream added, broadcasting: 5\nI0308 17:25:54.256937 830 log.go:172] (0xc0009b8210) Reply frame received for 5\nI0308 17:25:54.316276 830 log.go:172] (0xc0009b8210) Data frame received for 5\nI0308 17:25:54.316299 830 log.go:172] (0xc000814000) (5) Data frame handling\nI0308 17:25:54.316315 830 log.go:172] (0xc000814000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.16 30628\nConnection to 172.17.0.16 30628 port [tcp/30628] succeeded!\nI0308 17:25:54.316456 830 log.go:172] (0xc0009b8210) Data frame received for 5\nI0308 17:25:54.316471 830 log.go:172] (0xc000814000) (5) Data frame handling\nI0308 17:25:54.316588 830 log.go:172] (0xc0009b8210) Data frame received for 3\nI0308 17:25:54.316603 830 log.go:172] (0xc0007da000) (3) Data frame handling\nI0308 17:25:54.317723 830 log.go:172] (0xc0009b8210) Data frame received for 1\nI0308 17:25:54.317741 830 log.go:172] (0xc0005312c0) (1) Data frame handling\nI0308 17:25:54.317751 830 log.go:172] (0xc0005312c0) (1) Data frame sent\nI0308 17:25:54.317767 830 log.go:172] (0xc0009b8210) (0xc0005312c0) Stream removed, broadcasting: 1\nI0308 
17:25:54.317782 830 log.go:172] (0xc0009b8210) Go away received\nI0308 17:25:54.318065 830 log.go:172] (0xc0009b8210) (0xc0005312c0) Stream removed, broadcasting: 1\nI0308 17:25:54.318080 830 log.go:172] (0xc0009b8210) (0xc0007da000) Stream removed, broadcasting: 3\nI0308 17:25:54.318086 830 log.go:172] (0xc0009b8210) (0xc000814000) Stream removed, broadcasting: 5\n" Mar 8 17:25:54.321: INFO: stdout: "" Mar 8 17:25:54.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7210 execpodrcpvq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30628' Mar 8 17:25:54.499: INFO: stderr: "I0308 17:25:54.426699 850 log.go:172] (0xc00003bce0) (0xc0008fca00) Create stream\nI0308 17:25:54.426740 850 log.go:172] (0xc00003bce0) (0xc0008fca00) Stream added, broadcasting: 1\nI0308 17:25:54.430587 850 log.go:172] (0xc00003bce0) Reply frame received for 1\nI0308 17:25:54.430617 850 log.go:172] (0xc00003bce0) (0xc0006315e0) Create stream\nI0308 17:25:54.430624 850 log.go:172] (0xc00003bce0) (0xc0006315e0) Stream added, broadcasting: 3\nI0308 17:25:54.431298 850 log.go:172] (0xc00003bce0) Reply frame received for 3\nI0308 17:25:54.431322 850 log.go:172] (0xc00003bce0) (0xc0004bea00) Create stream\nI0308 17:25:54.431332 850 log.go:172] (0xc00003bce0) (0xc0004bea00) Stream added, broadcasting: 5\nI0308 17:25:54.432017 850 log.go:172] (0xc00003bce0) Reply frame received for 5\nI0308 17:25:54.493549 850 log.go:172] (0xc00003bce0) Data frame received for 3\nI0308 17:25:54.493571 850 log.go:172] (0xc0006315e0) (3) Data frame handling\nI0308 17:25:54.493593 850 log.go:172] (0xc00003bce0) Data frame received for 5\nI0308 17:25:54.493599 850 log.go:172] (0xc0004bea00) (5) Data frame handling\nI0308 17:25:54.493606 850 log.go:172] (0xc0004bea00) (5) Data frame sent\nI0308 17:25:54.493611 850 log.go:172] (0xc00003bce0) Data frame received for 5\nI0308 17:25:54.493615 850 log.go:172] (0xc0004bea00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30628\nConnection to 172.17.0.18 30628 port [tcp/30628] succeeded!\nI0308 17:25:54.495736 850 log.go:172] (0xc00003bce0) Data frame received for 1\nI0308 17:25:54.495756 850 log.go:172] (0xc0008fca00) (1) Data frame handling\nI0308 17:25:54.495771 850 log.go:172] (0xc0008fca00) (1) Data frame sent\nI0308 17:25:54.495782 850 log.go:172] (0xc00003bce0) (0xc0008fca00) Stream removed, broadcasting: 1\nI0308 17:25:54.495834 850 log.go:172] (0xc00003bce0) Go away received\nI0308 17:25:54.496072 850 log.go:172] (0xc00003bce0) (0xc0008fca00) Stream removed, broadcasting: 1\nI0308 17:25:54.496085 850 log.go:172] (0xc00003bce0) (0xc0006315e0) Stream removed, broadcasting: 3\nI0308 17:25:54.496090 850 log.go:172] (0xc00003bce0) (0xc0004bea00) Stream removed, broadcasting: 5\n" Mar 8 17:25:54.499: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:25:54.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7210" for this suite. 
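Note: taken together, the four nc probes above (all run from the helper pod execpodrcpvq) cover every path a NodePort service exposes: the service DNS name and its ClusterIP on the service port, plus each node's address on the allocated node port. Condensed from this run:
nc -zv -t -w 2 nodeport-test 80      # service name via cluster DNS
nc -zv -t -w 2 10.96.248.146 80      # ClusterIP, service port
nc -zv -t -w 2 172.17.0.16 30628     # first node IP, NodePort
nc -zv -t -w 2 172.17.0.18 30628     # second node IP, NodePort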
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:6.955 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":95,"skipped":1722,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:25:54.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-bf638542-4894-4216-9097-730055ffce13 STEP: Creating a pod to test consume configMaps Mar 8 17:25:54.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893" in namespace "configmap-6098" to be "Succeeded or Failed" Mar 8 17:25:54.573: INFO: Pod "pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659347ms Mar 8 17:25:56.577: INFO: Pod "pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008951762s STEP: Saw pod success Mar 8 17:25:56.577: INFO: Pod "pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893" satisfied condition "Succeeded or Failed" Mar 8 17:25:56.580: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893 container configmap-volume-test: STEP: delete the pod Mar 8 17:25:56.624: INFO: Waiting for pod pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893 to disappear Mar 8 17:25:56.634: INFO: Pod pod-configmaps-c0d4a2aa-f37f-4540-b3de-2d3e21002893 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:25:56.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6098" for this suite. 
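Note: "mappings and Item mode" above refers to the items list of a configMap volume source, which remaps selected keys to file paths and can set a per-file mode overriding the volume's defaultMode. A sketch of the volume stanza being exercised, with the configMap name from this run; the key, path, and mode values are assumptions for illustration:
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-bf638542-4894-4216-9097-730055ffce13
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400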
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1740,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:25:56.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0308 17:26:36.736306 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 17:26:36.736: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:26:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3861" for this suite. 
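Note: the "delete options" that keep the pods alive above are the deletion propagation policy, deleteOptions.propagationPolicy: Orphan at the API level. The kubectl equivalent, against a hypothetical controller name (current clients spell it --cascade=orphan; clients of this vintage used --cascade=false):
# delete the replication controller but leave its pods running, ownerless
kubectl delete rc my-rc -n gc-3861 --cascade=orphan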
• [SLOW TEST:40.087 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":97,"skipped":1743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:26:36.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 8 17:26:36.822: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 8 17:26:36.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9842' Mar 8 17:26:37.113: INFO: stderr: "" Mar 8 17:26:37.113: INFO: stdout: "service/agnhost-slave created\n" Mar 8 17:26:37.114: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 8 17:26:37.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9842' Mar 8 17:26:37.407: INFO: stderr: "" Mar 8 17:26:37.407: INFO: stdout: "service/agnhost-master created\n" Mar 8 17:26:37.407: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 8 17:26:37.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9842' Mar 8 17:26:37.669: INFO: stderr: "" Mar 8 17:26:37.669: INFO: stdout: "service/frontend created\n" Mar 8 17:26:37.669: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 8 17:26:37.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9842' Mar 8 17:26:37.944: INFO: stderr: "" Mar 8 17:26:37.944: INFO: stdout: "deployment.apps/frontend created\n" Mar 8 17:26:37.944: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 8 17:26:37.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9842' Mar 8 17:26:38.227: INFO: stderr: "" Mar 8 17:26:38.227: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 8 17:26:38.227: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 8 17:26:38.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9842' Mar 8 17:26:38.470: INFO: stderr: "" Mar 8 17:26:38.470: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 8 17:26:38.470: INFO: Waiting for all frontend pods to be Running. Mar 8 17:26:43.520: INFO: Waiting for frontend to serve content. Mar 8 17:26:43.532: INFO: Trying to add a new entry to the guestbook. Mar 8 17:26:43.544: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 8 17:26:43.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9842' Mar 8 17:26:43.716: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:26:43.716: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 8 17:26:43.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9842' Mar 8 17:26:43.869: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:26:43.869: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 17:26:43.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9842' Mar 8 17:26:44.003: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:26:44.003: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 17:26:44.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9842' Mar 8 17:26:44.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:26:44.081: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 17:26:44.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9842' Mar 8 17:26:44.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:26:44.152: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 17:26:44.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9842' Mar 8 17:26:44.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:26:44.217: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:26:44.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9842" for this suite. 
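------------------------------
Note: each guestbook Service above finds its backing pods purely through label selection; a trimmed sketch of that wiring follows (the names "example" and "example-backend" are illustrative, not taken from the test):

apiVersion: v1
kind: Service
metadata:
  name: example-backend
spec:
  ports:
  - port: 6379          # port the Service exposes
    targetPort: 6379    # container port traffic is forwarded to
  selector:             # must match the Deployment's pod template labels
    app: example
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
      tier: backend
  template:
    metadata:
      labels:           # these labels are what the Service selects on
        app: example
        tier: backend
    spec:
      containers:
      - name: backend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        ports:
        - containerPort: 6379

The cleanup step runs kubectl delete with --grace-period=0 --force, which is why every deletion prints the "Immediate deletion does not wait for confirmation" warning seen in this spec.
------------------------------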
• [SLOW TEST:7.475 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:316 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":98,"skipped":1784,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:26:44.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-c4ae9b8b-2dab-49c4-9351-d2c07a742a4c STEP: Creating a pod to test consume secrets Mar 8 17:26:44.307: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948" in namespace "projected-2137" to be "Succeeded or Failed" Mar 8 17:26:44.311: INFO: Pod "pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345369ms Mar 8 17:26:46.314: INFO: Pod "pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007615676s STEP: Saw pod success Mar 8 17:26:46.314: INFO: Pod "pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948" satisfied condition "Succeeded or Failed" Mar 8 17:26:46.317: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948 container projected-secret-volume-test: STEP: delete the pod Mar 8 17:26:46.673: INFO: Waiting for pod pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948 to disappear Mar 8 17:26:46.677: INFO: Pod pod-projected-secrets-ef2843f8-6293-4a53-9fc9-f77155b66948 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:26:46.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2137" for this suite. 
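------------------------------
Note: a minimal sketch of the kind of pod this spec creates, assuming a secret named example-secret (illustrative); defaultMode sets the permission bits on every file the projected volume writes:

apiVersion: v1
kind: Pod
metadata:
  name: example-projected-secret   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]   # file mode should show 0400
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400      # octal; applied to all files in the projection
      sources:
      - secret:
          name: example-secret
------------------------------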
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1805,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:26:46.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-a679636c-f8a4-4353-a0aa-aae6409f8fa6 STEP: Creating a pod to test consume configMaps Mar 8 17:26:46.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb" in namespace "projected-3656" to be "Succeeded or Failed" Mar 8 17:26:46.791: INFO: Pod "pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984849ms Mar 8 17:26:48.803: INFO: Pod "pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015519499s STEP: Saw pod success Mar 8 17:26:48.803: INFO: Pod "pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb" satisfied condition "Succeeded or Failed" Mar 8 17:26:48.806: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb container projected-configmap-volume-test: STEP: delete the pod Mar 8 17:26:48.846: INFO: Waiting for pod pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb to disappear Mar 8 17:26:48.851: INFO: Pod pod-projected-configmaps-0c7feabf-dda0-4ca1-bc06-e43742da13fb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:26:48.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3656" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1815,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:26:48.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:26:48.929: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:26:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5154" for this suite. • [SLOW TEST:6.342 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":101,"skipped":1841,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:26:55.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1 Mar 8 17:26:55.277: INFO: Pod name my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1: Found 0 pods out of 1 Mar 8 17:27:00.280: INFO: Pod name my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1: Found 1 pods out of 1 Mar 8 17:27:00.280: INFO: Ensuring all pods for ReplicationController 
"my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1" are running Mar 8 17:27:00.283: INFO: Pod "my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1-hgd6c" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:26:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:26:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:26:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:26:55 +0000 UTC Reason: Message:}]) Mar 8 17:27:00.283: INFO: Trying to dial the pod Mar 8 17:27:05.293: INFO: Controller my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1: Got expected result from replica 1 [my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1-hgd6c]: "my-hostname-basic-76bb31fe-f4bb-4992-af33-c26eb34224d1-hgd6c", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:05.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3666" for this suite. • [SLOW TEST:10.099 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":102,"skipped":1854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:05.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2342/configmap-test-f18ac4ed-b60d-4414-ab94-6ce418711d65 STEP: Creating a pod to test consume configMaps Mar 8 17:27:05.381: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74" in namespace "configmap-2342" to be "Succeeded or Failed" Mar 8 17:27:05.385: INFO: Pod "pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920121ms Mar 8 17:27:07.388: INFO: Pod "pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007197847s STEP: Saw pod success Mar 8 17:27:07.388: INFO: Pod "pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74" satisfied condition "Succeeded or Failed" Mar 8 17:27:07.390: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74 container env-test: STEP: delete the pod Mar 8 17:27:07.427: INFO: Waiting for pod pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74 to disappear Mar 8 17:27:07.515: INFO: Pod pod-configmaps-2b28fd96-ddfe-46b1-a35c-8b8fa87d3f74 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:07.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2342" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1894,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:07.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 8 17:27:07.667: INFO: Waiting up to 5m0s for pod "downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3" in namespace "downward-api-7699" to be "Succeeded or Failed" Mar 8 17:27:07.684: INFO: Pod "downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.443415ms Mar 8 17:27:09.688: INFO: Pod "downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020391629s STEP: Saw pod success Mar 8 17:27:09.688: INFO: Pod "downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3" satisfied condition "Succeeded or Failed" Mar 8 17:27:09.690: INFO: Trying to get logs from node latest-worker pod downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3 container dapi-container: STEP: delete the pod Mar 8 17:27:09.709: INFO: Waiting for pod downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3 to disappear Mar 8 17:27:09.714: INFO: Pod downward-api-a16069f2-0937-444a-8f2d-95bea49ab6d3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:09.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7699" for this suite. 
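------------------------------
Note: the two preceding specs both inject environment variables, one from a ConfigMap key and one from the downward API. When a container declares no CPU or memory limits, limits.cpu and limits.memory resolve to the node's allocatable capacity, which is what the Downward API spec verifies. A combined sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-downward-env   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_FROM_CONFIGMAP
      valueFrom:
        configMapKeyRef:
          name: example-config   # ConfigMap name and key are assumptions
          key: data
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:        # no limit is set on this container, so this
          resource: limits.cpu   # falls back to the node's allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
------------------------------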
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1904,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:09.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 8 17:27:09.809: INFO: Waiting up to 5m0s for pod "client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11" in namespace "containers-2747" to be "Succeeded or Failed" Mar 8 17:27:09.812: INFO: Pod "client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.004811ms Mar 8 17:27:11.816: INFO: Pod "client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007127178s STEP: Saw pod success Mar 8 17:27:11.816: INFO: Pod "client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11" satisfied condition "Succeeded or Failed" Mar 8 17:27:11.819: INFO: Trying to get logs from node latest-worker pod client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11 container test-container: STEP: delete the pod Mar 8 17:27:11.847: INFO: Waiting for pod client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11 to disappear Mar 8 17:27:11.852: INFO: Pod client-containers-5d6437ae-cf69-4d1b-bb00-1639b566bc11 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:11.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2747" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1923,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:11.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 8 17:27:12.086: INFO: Waiting up to 5m0s for pod "pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03" in namespace "emptydir-4985" to be "Succeeded or Failed" Mar 8 17:27:12.092: INFO: Pod "pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03": Phase="Pending", Reason="", readiness=false. Elapsed: 5.497316ms Mar 8 17:27:14.096: INFO: Pod "pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009695521s Mar 8 17:27:16.103: INFO: Pod "pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017323219s STEP: Saw pod success Mar 8 17:27:16.103: INFO: Pod "pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03" satisfied condition "Succeeded or Failed" Mar 8 17:27:16.107: INFO: Trying to get logs from node latest-worker pod pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03 container test-container: STEP: delete the pod Mar 8 17:27:16.122: INFO: Waiting for pod pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03 to disappear Mar 8 17:27:16.127: INFO: Pod pod-7855bfde-90b4-48ef-9c8c-466a0fb53d03 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:16.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4985" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:16.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-1d8d8358-1b87-4173-9809-ac97d66bf4fb STEP: Creating a pod to test consume secrets Mar 8 17:27:16.250: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108" in namespace "projected-1189" to be "Succeeded or Failed" Mar 8 17:27:16.253: INFO: Pod "pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8379ms Mar 8 17:27:18.256: INFO: Pod "pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006386811s STEP: Saw pod success Mar 8 17:27:18.256: INFO: Pod "pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108" satisfied condition "Succeeded or Failed" Mar 8 17:27:18.259: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108 container secret-volume-test: STEP: delete the pod Mar 8 17:27:18.306: INFO: Waiting for pod pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108 to disappear Mar 8 17:27:18.308: INFO: Pod pod-projected-secrets-5b07a435-b07a-47ab-bb98-62db9d51b108 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:18.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1189" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1964,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:18.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:27:18.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24" in namespace "downward-api-9097" to be "Succeeded or Failed" Mar 8 17:27:18.468: INFO: Pod "downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24": Phase="Pending", Reason="", readiness=false. Elapsed: 75.413445ms Mar 8 17:27:20.471: INFO: Pod "downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078699047s STEP: Saw pod success Mar 8 17:27:20.471: INFO: Pod "downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24" satisfied condition "Succeeded or Failed" Mar 8 17:27:20.473: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24 container client-container: STEP: delete the pod Mar 8 17:27:20.488: INFO: Waiting for pod downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24 to disappear Mar 8 17:27:20.493: INFO: Pod downwardapi-volume-f0a8b89a-f713-4264-9793-b58dfb1d1d24 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:27:20.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9097" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1982,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:27:20.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-253abb42-ae13-4388-a258-484c366fd033 in namespace container-probe-8578 Mar 8 17:27:22.649: INFO: Started pod busybox-253abb42-ae13-4388-a258-484c366fd033 in namespace container-probe-8578 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 17:27:22.669: INFO: Initial restart count of pod busybox-253abb42-ae13-4388-a258-484c366fd033 is 0 Mar 8 17:28:16.805: INFO: Restart count of pod container-probe-8578/busybox-253abb42-ae13-4388-a258-484c366fd033 is now 1 (54.135414373s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:28:16.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8578" for this suite. 
• [SLOW TEST:56.367 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1992,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:28:16.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-cbb82cea-1383-4fb0-bacd-e52f94712150 STEP: Creating a pod to test consume configMaps Mar 8 17:28:16.981: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818" in namespace "configmap-3124" to be "Succeeded or Failed" Mar 8 17:28:16.986: INFO: Pod "pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818": Phase="Pending", Reason="", readiness=false. Elapsed: 5.787857ms Mar 8 17:28:18.990: INFO: Pod "pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00958359s Mar 8 17:28:20.994: INFO: Pod "pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013631871s STEP: Saw pod success Mar 8 17:28:20.994: INFO: Pod "pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818" satisfied condition "Succeeded or Failed" Mar 8 17:28:20.998: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818 container configmap-volume-test: STEP: delete the pod Mar 8 17:28:21.055: INFO: Waiting for pod pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818 to disappear Mar 8 17:28:21.057: INFO: Pod pod-configmaps-c1ccb014-e427-4eab-bece-61e66d03e818 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:28:21.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3124" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":2007,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:28:21.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:28:21.135: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-023185e2-72f1-4e47-840a-2b5291590777" in namespace "security-context-test-7921" to be "Succeeded or Failed" Mar 8 17:28:21.141: INFO: Pod "alpine-nnp-false-023185e2-72f1-4e47-840a-2b5291590777": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493986ms Mar 8 17:28:23.145: INFO: Pod "alpine-nnp-false-023185e2-72f1-4e47-840a-2b5291590777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010318993s Mar 8 17:28:25.149: INFO: Pod "alpine-nnp-false-023185e2-72f1-4e47-840a-2b5291590777": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013887817s Mar 8 17:28:25.149: INFO: Pod "alpine-nnp-false-023185e2-72f1-4e47-840a-2b5291590777" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:28:25.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7921" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":2012,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:28:25.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:28:25.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:28:28.614: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:28:28.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7389-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:28:29.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4632" for this suite. STEP: Destroying namespace "webhook-4632-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":112,"skipped":2016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:28:29.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9785 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9785 STEP: Creating statefulset with conflicting port in namespace statefulset-9785 STEP: Waiting until pod test-pod will start running in namespace statefulset-9785 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9785 Mar 8 17:28:31.920: INFO: Observed stateful pod in namespace: statefulset-9785, name: ss-0, uid: cc949c34-1b46-4fc3-9b98-3cd160b24ab8, status phase: Pending. Waiting for statefulset controller to delete. Mar 8 17:28:32.462: INFO: Observed stateful pod in namespace: statefulset-9785, name: ss-0, uid: cc949c34-1b46-4fc3-9b98-3cd160b24ab8, status phase: Failed. Waiting for statefulset controller to delete. Mar 8 17:28:32.468: INFO: Observed stateful pod in namespace: statefulset-9785, name: ss-0, uid: cc949c34-1b46-4fc3-9b98-3cd160b24ab8, status phase: Failed. Waiting for statefulset controller to delete. Mar 8 17:28:32.495: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9785 STEP: Removing pod with conflicting port in namespace statefulset-9785 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9785 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 17:28:44.602: INFO: Deleting all statefulset in ns statefulset-9785 Mar 8 17:28:44.605: INFO: Scaling statefulset ss to 0 Mar 8 17:28:54.625: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 17:28:54.629: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:28:54.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9785" for this suite. 
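------------------------------
Note: the eviction scenario above hinges on a hostPort conflict: a plain pod claims a host port on the node, the stateful pod ss-0 requests the same one and lands in the Failed phase, and the StatefulSet controller keeps deleting and recreating it until the conflicting pod is removed. A sketch of such a StatefulSet (names and port are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test   # headless Service that provides the pods' DNS identity
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd
        ports:
        - containerPort: 80
          hostPort: 21017   # conflicts with another pod holding the same
                            # hostPort on the node, forcing the Failed phase
------------------------------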
• [SLOW TEST:24.912 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":113,"skipped":2044,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:28:54.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-ab33de0c-fa7a-4cc3-ad44-8e80b937635b STEP: Creating a pod to test consume secrets Mar 8 17:28:54.772: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365" in namespace "projected-2205" to be "Succeeded or Failed" Mar 8 17:28:54.784: INFO: Pod "pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365": Phase="Pending", Reason="", readiness=false. Elapsed: 11.318396ms Mar 8 17:28:56.788: INFO: Pod "pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01529626s STEP: Saw pod success Mar 8 17:28:56.788: INFO: Pod "pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365" satisfied condition "Succeeded or Failed" Mar 8 17:28:56.791: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365 container projected-secret-volume-test: STEP: delete the pod Mar 8 17:28:56.827: INFO: Waiting for pod pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365 to disappear Mar 8 17:28:56.831: INFO: Pod pod-projected-secrets-3b32380f-7957-4ed7-8e74-51ceb3e8e365 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:28:56.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2205" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":2047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:28:56.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:28:57.641: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:29:00.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:29:00.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-992" for this suite. STEP: Destroying namespace "webhook-992-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":115,"skipped":2106,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:29:00.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 17:29:01.401: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 8 17:29:03.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285341, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285341, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285341, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285341, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:29:06.449: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:29:06.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:29:07.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2094" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.000 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":116,"skipped":2115,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:29:07.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 8 17:29:07.905: INFO: Waiting up to 5m0s for pod "pod-855936e6-3e55-4331-844c-fd5dc3258caa" in namespace "emptydir-3936" to be "Succeeded or Failed" Mar 8 17:29:07.909: INFO: Pod "pod-855936e6-3e55-4331-844c-fd5dc3258caa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716941ms Mar 8 17:29:09.913: INFO: Pod "pod-855936e6-3e55-4331-844c-fd5dc3258caa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00785764s Mar 8 17:29:11.918: INFO: Pod "pod-855936e6-3e55-4331-844c-fd5dc3258caa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012152298s STEP: Saw pod success Mar 8 17:29:11.918: INFO: Pod "pod-855936e6-3e55-4331-844c-fd5dc3258caa" satisfied condition "Succeeded or Failed" Mar 8 17:29:11.921: INFO: Trying to get logs from node latest-worker pod pod-855936e6-3e55-4331-844c-fd5dc3258caa container test-container: STEP: delete the pod Mar 8 17:29:11.948: INFO: Waiting for pod pod-855936e6-3e55-4331-844c-fd5dc3258caa to disappear Mar 8 17:29:11.951: INFO: Pod pod-855936e6-3e55-4331-844c-fd5dc3258caa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:29:11.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3936" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2120,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:29:11.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:29:12.032: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 8 17:29:14.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 create -f -' Mar 8 17:29:17.055: INFO: stderr: "" Mar 8 17:29:17.055: INFO: stdout: "e2e-test-crd-publish-openapi-9680-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 8 17:29:17.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 delete e2e-test-crd-publish-openapi-9680-crds test-foo' Mar 8 17:29:17.187: INFO: stderr: "" Mar 8 17:29:17.187: INFO: stdout: "e2e-test-crd-publish-openapi-9680-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 8 17:29:17.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 apply -f -' Mar 8 17:29:17.428: INFO: stderr: "" Mar 8 17:29:17.428: INFO: stdout: "e2e-test-crd-publish-openapi-9680-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 8 17:29:17.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 delete e2e-test-crd-publish-openapi-9680-crds test-foo' Mar 8 17:29:17.510: INFO: stderr: "" Mar 8 17:29:17.510: INFO: stdout: "e2e-test-crd-publish-openapi-9680-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 8 17:29:17.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 create -f -' Mar 8 17:29:17.727: INFO: rc: 1 Mar 8 17:29:17.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 apply -f -' Mar 8 17:29:17.991: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 8 17:29:17.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 create -f -' Mar 8 17:29:18.227: INFO: rc: 1 Mar 8 
17:29:18.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7861 apply -f -' Mar 8 17:29:18.441: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 8 17:29:18.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9680-crds' Mar 8 17:29:18.722: INFO: stderr: "" Mar 8 17:29:18.722: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9680-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively Mar 8 17:29:18.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9680-crds.metadata' Mar 8 17:29:18.985: INFO: stderr: "" Mar 8 17:29:18.985: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9680-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Mar 8 17:29:18.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9680-crds.spec' Mar 8 17:29:19.247: INFO: stderr: "" Mar 8 17:29:19.247: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9680-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 8 17:29:19.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9680-crds.spec.bars' Mar 8 17:29:19.531: INFO: stderr: "" Mar 8 17:29:19.531: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9680-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 8 17:29:19.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9680-crds.spec.bars2' Mar 8 17:29:19.757: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:29:21.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7861" for this suite. • [SLOW TEST:9.689 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":118,"skipped":2123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:29:21.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:29:21.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5538" for this suite.
STEP: Destroying namespace "nspatchtest-38c8c5e0-3fb4-466d-a9b0-f8431ae5d866-2793" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":119,"skipped":2157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:29:21.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-1c53a176-6dbf-4326-be3a-babff9f284a1 in namespace container-probe-4627 Mar 8 17:29:23.961: INFO: Started pod liveness-1c53a176-6dbf-4326-be3a-babff9f284a1 in namespace container-probe-4627 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 17:29:23.964: INFO: Initial restart count of pod liveness-1c53a176-6dbf-4326-be3a-babff9f284a1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:33:24.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4627" for this suite. 
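Aside: the pod behind this spec pairs a server listening on port 8080 with a TCP liveness probe against the same port, so every probe succeeds and the container is never restarted. A sketch of that shape, where the pod name, args, and probe timings are illustrative assumptions rather than the suite's exact values (the image is the agnhost image the suite uses elsewhere in this run):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-8080          # hypothetical name
spec:
  containers:
  - name: server
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["netexec", "--http-port=8080"]   # assumption: any process serving on 8080 works
    livenessProbe:
      tcpSocket:
        port: 8080                 # probes the port the container actually serves
      initialDelaySeconds: 15
      periodSeconds: 10

Because a TCP connect to 8080 always succeeds, the kubelet never kills the container; the run above records an initial restartCount of 0 and confirms it is unchanged when the pod is deleted about four minutes later.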
• [SLOW TEST:243.146 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2184,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:33:24.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313 STEP: creating the pod Mar 8 17:33:25.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5171' Mar 8 17:33:25.384: INFO: stderr: "" Mar 8 17:33:25.384: INFO: stdout: "pod/pause created\n" Mar 8 17:33:25.384: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 8 17:33:25.385: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5171" to be "running and ready" Mar 8 17:33:25.391: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271081ms Mar 8 17:33:27.394: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.009884187s Mar 8 17:33:27.394: INFO: Pod "pause" satisfied condition "running and ready" Mar 8 17:33:27.394: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 8 17:33:27.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5171' Mar 8 17:33:27.511: INFO: stderr: "" Mar 8 17:33:27.511: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 8 17:33:27.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5171' Mar 8 17:33:27.606: INFO: stderr: "" Mar 8 17:33:27.606: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 8 17:33:27.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5171' Mar 8 17:33:27.713: INFO: stderr: "" Mar 8 17:33:27.713: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 8 17:33:27.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5171' Mar 8 17:33:27.789: INFO: stderr: "" Mar 8 17:33:27.789: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Mar 8 17:33:27.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5171' Mar 8 17:33:27.897: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:33:27.897: INFO: stdout: "pod \"pause\" force deleted\n" Mar 8 17:33:27.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5171' Mar 8 17:33:27.970: INFO: stderr: "No resources found in kubectl-5171 namespace.\n" Mar 8 17:33:27.970: INFO: stdout: "" Mar 8 17:33:27.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5171 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 17:33:28.035: INFO: stderr: "" Mar 8 17:33:28.035: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:33:28.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5171" for this suite. 
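Aside: the imperative commands in this spec have a simple declarative equivalent. "kubectl label pods pause testing-label=testing-label-value" adds the label, the trailing-dash form "kubectl label pods pause testing-label-" removes it, and "kubectl get pod pause -L testing-label" surfaces it as an extra output column. The same starting state could be declared up front; the image below is an illustrative assumption (any long-running container works):

apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value   # the label the test adds and then strips
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2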
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":121,"skipped":2186,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:33:28.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:33:39.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3274" for this suite. • [SLOW TEST:11.108 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":122,"skipped":2187,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:33:39.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:33:50.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-487" for this suite. • [SLOW TEST:11.138 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":123,"skipped":2188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:33:50.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0308 17:33:51.475766 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 17:33:51.475: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:33:51.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8000" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":124,"skipped":2215,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:33:51.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:33:51.631: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 8 17:33:56.635: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 17:33:56.636: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 8 17:33:58.639: INFO: Creating deployment "test-rollover-deployment" Mar 8 17:33:58.649: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 8 17:34:00.656: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 8 17:34:00.663: INFO: Ensure that both replica sets have 1 created replica Mar 8 17:34:00.668: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 8 17:34:00.675: INFO: Updating deployment test-rollover-deployment Mar 8 17:34:00.675: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 8 17:34:02.715: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 8 17:34:02.722: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 8 17:34:02.727: INFO: all replica sets need to contain the pod-template-hash label Mar 8 17:34:02.728: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285642, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 17:34:04.734: INFO: all replica sets need to contain the pod-template-hash label Mar 8 17:34:04.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285642, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 17:34:06.735: INFO: all replica sets need to contain the pod-template-hash label Mar 8 17:34:06.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285642, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 17:34:08.735: INFO: all replica sets need to contain the pod-template-hash label Mar 8 17:34:08.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, 
loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285642, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 17:34:10.735: INFO: all replica sets need to contain the pod-template-hash label Mar 8 17:34:10.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285642, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285638, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 17:34:12.734: INFO: Mar 8 17:34:12.734: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 17:34:12.741: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6240 /apis/apps/v1/namespaces/deployment-6240/deployments/test-rollover-deployment db51b6b1-ca01-4783-89ae-f3d938e244c6 53286 2 2020-03-08 17:33:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030c4e18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum 
availability.,LastUpdateTime:2020-03-08 17:33:58 +0000 UTC,LastTransitionTime:2020-03-08 17:33:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-08 17:34:12 +0000 UTC,LastTransitionTime:2020-03-08 17:33:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 17:34:12.745: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-6240 /apis/apps/v1/namespaces/deployment-6240/replicasets/test-rollover-deployment-78df7bc796 ce4ddb42-ef0c-4a58-8475-fd4e23d889da 53274 2 2020-03-08 17:34:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment db51b6b1-ca01-4783-89ae-f3d938e244c6 0xc0030c52e7 0xc0030c52e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030c5358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:34:12.745: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 8 17:34:12.745: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6240 /apis/apps/v1/namespaces/deployment-6240/replicasets/test-rollover-controller 95135253-f48c-48b9-8f21-4bcb47bfe597 53284 2 2020-03-08 17:33:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment db51b6b1-ca01-4783-89ae-f3d938e244c6 0xc0030c5217 0xc0030c5218}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0030c5278 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:34:12.745: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6240 /apis/apps/v1/namespaces/deployment-6240/replicasets/test-rollover-deployment-f6c94f66c 9d8ecd6b-6323-4d10-b47b-04127f5a6048 53228 2 2020-03-08 17:33:58 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment db51b6b1-ca01-4783-89ae-f3d938e244c6 0xc0030c53c0 0xc0030c53c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030c5438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:34:12.750: INFO: Pod "test-rollover-deployment-78df7bc796-nqhx2" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-nqhx2 test-rollover-deployment-78df7bc796- deployment-6240 /api/v1/namespaces/deployment-6240/pods/test-rollover-deployment-78df7bc796-nqhx2 efc6f608-1648-4e4f-9946-3930a20b941b 53243 0 2020-03-08 17:34:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 ce4ddb42-ef0c-4a58-8475-fd4e23d889da 0xc002876907 0xc002876908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ch7c2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ch7c2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ch7c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:34:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:34:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:34:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:34:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.239,StartTime:2020-03-08 17:34:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:34:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c5b4b36f2b963b122c70b1d440c1bca07e1c29654a05388099b8b10843ff1462,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:34:12.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6240" for this suite. • [SLOW TEST:21.276 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":125,"skipped":2219,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:34:12.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-31a1c222-549c-4f07-ad12-0dae48188f87 STEP: Creating a pod to test consume secrets Mar 8 17:34:12.844: INFO: Waiting up to 5m0s for pod "pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a" in namespace "secrets-7646" to be "Succeeded or Failed" Mar 8 17:34:12.859: INFO: Pod "pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.252032ms Mar 8 17:34:14.863: INFO: Pod "pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019035522s Mar 8 17:34:16.866: INFO: Pod "pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022657752s STEP: Saw pod success Mar 8 17:34:16.867: INFO: Pod "pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a" satisfied condition "Succeeded or Failed" Mar 8 17:34:16.869: INFO: Trying to get logs from node latest-worker pod pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a container secret-volume-test: STEP: delete the pod Mar 8 17:34:16.931: INFO: Waiting for pod pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a to disappear Mar 8 17:34:16.937: INFO: Pod pod-secrets-d7451d9c-a0b3-4484-8c21-e173097fd35a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:34:16.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7646" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:34:16.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:34:17.022: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 17:34:20.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8736 create -f -' Mar 8 17:34:22.734: INFO: stderr: "" Mar 8 17:34:22.734: INFO: stdout: "e2e-test-crd-publish-openapi-4472-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 17:34:22.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8736 delete e2e-test-crd-publish-openapi-4472-crds test-cr' Mar 8 17:34:22.863: INFO: stderr: "" Mar 8 17:34:22.863: INFO: stdout: "e2e-test-crd-publish-openapi-4472-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 8 17:34:22.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8736 apply -f -' Mar 8 17:34:23.108: INFO: stderr: "" Mar 8 17:34:23.108: INFO: stdout: "e2e-test-crd-publish-openapi-4472-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 17:34:23.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8736 delete e2e-test-crd-publish-openapi-4472-crds test-cr' Mar 8 17:34:23.231: INFO: stderr: "" Mar 8 17:34:23.231: INFO: stdout: 
"e2e-test-crd-publish-openapi-4472-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 17:34:23.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4472-crds' Mar 8 17:34:23.431: INFO: stderr: "" Mar 8 17:34:23.431: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4472-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:34:26.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8736" for this suite. 
• [SLOW TEST:9.457 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":127,"skipped":2265,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:34:26.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:34:26.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5591" for this suite. 
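Aside: the secret in this spec is ordinary; the point is the API lifecycle it walks through: create, list across all namespaces, patch, then delete by label selector. A sketch of such a secret, where the name, label, and payload are illustrative assumptions:

apiVersion: v1
kind: Secret
metadata:
  name: patch-demo-secret          # hypothetical name
  labels:
    purpose: patch-test            # a label a selector-based delete can match on
type: Opaque
data:
  key: dmFsdWU=                    # base64 encoding of "value"

A merge patch against metadata.labels or data then shows up in a follow-up list, and a selector-based delete (the "kubectl delete secret -l purpose=patch-test" pattern) removes every matching secret at once, which is what the final two STEPs above verify.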
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":128,"skipped":2275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:34:26.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:34:26.575: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Pending, waiting for it to be Running (with Ready = true) Mar 8 17:34:28.579: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:30.580: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:32.579: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:34.578: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:36.579: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:38.599: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:40.584: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:42.579: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:44.579: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:46.580: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:48.579: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = false) Mar 8 17:34:50.578: INFO: The status of Pod test-webserver-fd75db75-05c8-4121-89df-807ad418cbb3 is Running (Ready = true) Mar 8 17:34:50.580: INFO: Container started at 2020-03-08 17:34:27 +0000 UTC, pod became ready at 2020-03-08 17:34:49 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:34:50.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9432" for this suite. 
• [SLOW TEST:24.074 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2333,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:34:50.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1561 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 17:34:50.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3304' Mar 8 17:34:50.722: INFO: stderr: "" Mar 8 17:34:50.722: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 8 17:34:55.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3304 -o json' Mar 8 17:34:55.888: INFO: stderr: "" Mar 8 17:34:55.888: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-08T17:34:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3304\",\n \"resourceVersion\": \"53522\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3304/pods/e2e-test-httpd-pod\",\n \"uid\": \"8a09c38e-589a-4da1-97d6-c0b760382337\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-x5j8r\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n 
\"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-x5j8r\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-x5j8r\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T17:34:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T17:34:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T17:34:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T17:34:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b35387aaee8184555397430b2c2266a51fab923b0e7790ab3df839e3d3ed414a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-08T17:34:52Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.242\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.242\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-08T17:34:50Z\"\n }\n}\n" STEP: replace the image in the pod Mar 8 17:34:55.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3304' Mar 8 17:34:56.201: INFO: stderr: "" Mar 8 17:34:56.202: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1566 Mar 8 17:34:56.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3304' Mar 8 17:35:02.481: INFO: stderr: "" Mar 8 17:35:02.481: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:35:02.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3304" for this suite. 
• [SLOW TEST:11.902 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":130,"skipped":2338,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:35:02.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:35:02.533: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:35:04.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7504" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:35:04.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 8 17:35:04.640: INFO: >>> kubeConfig: /root/.kube/config Mar 8 17:35:06.476: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:35:15.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2894" for this suite. 
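Concretely, "show up in OpenAPI documentation" means that once a CRD is established, its schema is merged into the aggregated /openapi/v2 document and kubectl explain can resolve the new kind. A minimal sketch for one of the two groups, using hypothetical group and kind names:
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
# A second CRD under a different group (say groupb.example.com) is created the same way;
# both schemas should then be visible via the aggregated OpenAPI endpoint and explain:
kubectl get --raw /openapi/v2 | grep -o 'groupa.example.com[^"]*' | head
kubectl explain foos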
• [SLOW TEST:11.095 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":132,"skipped":2369,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:35:15.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 8 17:35:18.277: INFO: Successfully updated pod "labelsupdate8927002d-aa0a-4817-a909-118d6135d91d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:35:22.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7516" for this suite. 
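"Successfully updated pod" above refers to relabeling the running pod; the spec then waits for the projected downward API file to reflect the change. A minimal sketch with hypothetical names:
kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: first
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Relabel the live pod; the kubelet rewrites the projected file on its next sync,
# so the new value shows up in the container's output shortly afterwards.
kubectl label pod labels-demo -n demo stage=second --overwrite
kubectl logs labels-demo -n demo -f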
• [SLOW TEST:6.628 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2370,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:35:22.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-1893027a-52a1-423d-8f07-4c56d61c1bb9 in namespace container-probe-6682 Mar 8 17:35:24.412: INFO: Started pod liveness-1893027a-52a1-423d-8f07-4c56d61c1bb9 in namespace container-probe-6682 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 17:35:24.415: INFO: Initial restart count of pod liveness-1893027a-52a1-423d-8f07-4c56d61c1bb9 is 0 Mar 8 17:35:42.452: INFO: Restart count of pod container-probe-6682/liveness-1893027a-52a1-423d-8f07-4c56d61c1bb9 is now 1 (18.037197837s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:35:42.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6682" for this suite. 
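The restart recorded above ("Restart count ... is now 1") is the liveness machinery at work. A minimal sketch using the image from the upstream probe documentation, which answers /healthz with 200 for its first 10 seconds and 500 afterwards; names are hypothetical and the test's actual image may differ:
kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# Once /healthz starts returning 500, the kubelet kills and restarts the container
# and restartCount climbs, matching the progression logged above.
kubectl get pod liveness-demo -n demo -o jsonpath='{.status.containerStatuses[0].restartCount}'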
• [SLOW TEST:20.195 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:35:42.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:35:42.582: INFO: Creating deployment "test-recreate-deployment" Mar 8 17:35:42.585: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 8 17:35:42.608: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 8 17:35:44.616: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 8 17:35:44.619: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 8 17:35:44.625: INFO: Updating deployment test-recreate-deployment Mar 8 17:35:44.625: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 17:35:44.849: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8486 /apis/apps/v1/namespaces/deployment-8486/deployments/test-recreate-deployment f27d155c-0236-4ca9-b224-1dcbde601c90 53853 2 2020-03-08 17:35:42 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047f6d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 17:35:44 +0000 UTC,LastTransitionTime:2020-03-08 17:35:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-08 17:35:44 +0000 UTC,LastTransitionTime:2020-03-08 17:35:42 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 8 17:35:44.853: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-8486 /apis/apps/v1/namespaces/deployment-8486/replicasets/test-recreate-deployment-5f94c574ff 93fac36b-d80b-4d25-81ed-c3a5cead5c63 53852 1 2020-03-08 17:35:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f27d155c-0236-4ca9-b224-1dcbde601c90 0xc0047f71d7 0xc0047f71d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047f7258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:35:44.853: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 8 17:35:44.853: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-8486 /apis/apps/v1/namespaces/deployment-8486/replicasets/test-recreate-deployment-846c7dd955 76c712e9-72d4-4188-9775-2eec42863b0c 53839 2 2020-03-08 17:35:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f27d155c-0236-4ca9-b224-1dcbde601c90 0xc0047f7317 0xc0047f7318}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] 
[]} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047f73b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:35:44.856: INFO: Pod "test-recreate-deployment-5f94c574ff-99shx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-99shx test-recreate-deployment-5f94c574ff- deployment-8486 /api/v1/namespaces/deployment-8486/pods/test-recreate-deployment-5f94c574ff-99shx b0efc0c2-2503-4369-b28e-30d1dfba0bb0 53855 0 2020-03-08 17:35:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 93fac36b-d80b-4d25-81ed-c3a5cead5c63 0xc003a24057 0xc003a24058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-26cpd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-26cpd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-26cpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:35:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:35:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:35:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:35:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:35:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:35:44.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8486" for this suite. 
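The dump above shows Strategy Type:Recreate, which is the property under test: the old ReplicaSet is scaled to zero before the new one is scaled up, so pods from two revisions never run side by side. A minimal sketch mirroring the test's revision-1 agnhost to revision-2 httpd rollout, with hypothetical object names:
kubectl apply -n demo -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
EOF
# Trigger a second rollout; with Recreate there is a window with zero available
# replicas while the old pod terminates, visible as MinimumReplicasUnavailable above.
kubectl set image deployment/recreate-demo httpd=docker.io/library/httpd:2.4.38-alpine -n demo
kubectl rollout status deployment/recreate-demo -n demo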
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":135,"skipped":2407,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:35:44.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 8 17:35:44.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4465' Mar 8 17:35:45.228: INFO: stderr: "" Mar 8 17:35:45.228: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 17:35:45.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:35:45.320: INFO: stderr: "" Mar 8 17:35:45.320: INFO: stdout: "update-demo-nautilus-4s49s update-demo-nautilus-b42kd " Mar 8 17:35:45.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4s49s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:35:45.382: INFO: stderr: "" Mar 8 17:35:45.382: INFO: stdout: "" Mar 8 17:35:45.382: INFO: update-demo-nautilus-4s49s is created but not running Mar 8 17:35:50.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:35:50.493: INFO: stderr: "" Mar 8 17:35:50.493: INFO: stdout: "update-demo-nautilus-4s49s update-demo-nautilus-b42kd " Mar 8 17:35:50.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4s49s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:35:50.587: INFO: stderr: "" Mar 8 17:35:50.587: INFO: stdout: "true" Mar 8 17:35:50.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4s49s -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:35:50.666: INFO: stderr: "" Mar 8 17:35:50.666: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 17:35:50.666: INFO: validating pod update-demo-nautilus-4s49s Mar 8 17:35:50.669: INFO: got data: { "image": "nautilus.jpg" } Mar 8 17:35:50.669: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 17:35:50.669: INFO: update-demo-nautilus-4s49s is verified up and running Mar 8 17:35:50.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:35:50.736: INFO: stderr: "" Mar 8 17:35:50.736: INFO: stdout: "true" Mar 8 17:35:50.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:35:50.796: INFO: stderr: "" Mar 8 17:35:50.796: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 17:35:50.796: INFO: validating pod update-demo-nautilus-b42kd Mar 8 17:35:50.799: INFO: got data: { "image": "nautilus.jpg" } Mar 8 17:35:50.799: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 17:35:50.799: INFO: update-demo-nautilus-b42kd is verified up and running STEP: scaling down the replication controller Mar 8 17:35:50.801: INFO: scanned /root for discovery docs: Mar 8 17:35:50.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4465' Mar 8 17:35:51.914: INFO: stderr: "" Mar 8 17:35:51.914: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 17:35:51.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:35:52.027: INFO: stderr: "" Mar 8 17:35:52.027: INFO: stdout: "update-demo-nautilus-4s49s update-demo-nautilus-b42kd " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 17:35:57.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:35:57.147: INFO: stderr: "" Mar 8 17:35:57.147: INFO: stdout: "update-demo-nautilus-4s49s update-demo-nautilus-b42kd " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 17:36:02.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:36:02.280: INFO: stderr: "" Mar 8 17:36:02.280: INFO: stdout: "update-demo-nautilus-4s49s update-demo-nautilus-b42kd " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 17:36:07.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:36:07.375: INFO: stderr: "" Mar 8 17:36:07.375: INFO: stdout: "update-demo-nautilus-b42kd " Mar 8 17:36:07.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:07.466: INFO: stderr: "" Mar 8 17:36:07.466: INFO: stdout: "true" Mar 8 17:36:07.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:07.556: INFO: stderr: "" Mar 8 17:36:07.556: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 17:36:07.556: INFO: validating pod update-demo-nautilus-b42kd Mar 8 17:36:07.559: INFO: got data: { "image": "nautilus.jpg" } Mar 8 17:36:07.559: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 17:36:07.559: INFO: update-demo-nautilus-b42kd is verified up and running STEP: scaling up the replication controller Mar 8 17:36:07.561: INFO: scanned /root for discovery docs: Mar 8 17:36:07.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4465' Mar 8 17:36:08.669: INFO: stderr: "" Mar 8 17:36:08.669: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 17:36:08.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:36:08.779: INFO: stderr: "" Mar 8 17:36:08.779: INFO: stdout: "update-demo-nautilus-b42kd update-demo-nautilus-dbvvs " Mar 8 17:36:08.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:08.860: INFO: stderr: "" Mar 8 17:36:08.860: INFO: stdout: "true" Mar 8 17:36:08.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:08.927: INFO: stderr: "" Mar 8 17:36:08.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 17:36:08.927: INFO: validating pod update-demo-nautilus-b42kd Mar 8 17:36:08.930: INFO: got data: { "image": "nautilus.jpg" } Mar 8 17:36:08.930: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 17:36:08.930: INFO: update-demo-nautilus-b42kd is verified up and running Mar 8 17:36:08.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbvvs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:09.007: INFO: stderr: "" Mar 8 17:36:09.007: INFO: stdout: "" Mar 8 17:36:09.007: INFO: update-demo-nautilus-dbvvs is created but not running Mar 8 17:36:14.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4465' Mar 8 17:36:14.133: INFO: stderr: "" Mar 8 17:36:14.133: INFO: stdout: "update-demo-nautilus-b42kd update-demo-nautilus-dbvvs " Mar 8 17:36:14.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:14.236: INFO: stderr: "" Mar 8 17:36:14.236: INFO: stdout: "true" Mar 8 17:36:14.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b42kd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:14.311: INFO: stderr: "" Mar 8 17:36:14.311: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 17:36:14.311: INFO: validating pod update-demo-nautilus-b42kd Mar 8 17:36:14.314: INFO: got data: { "image": "nautilus.jpg" } Mar 8 17:36:14.314: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 17:36:14.314: INFO: update-demo-nautilus-b42kd is verified up and running Mar 8 17:36:14.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbvvs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:14.384: INFO: stderr: "" Mar 8 17:36:14.384: INFO: stdout: "true" Mar 8 17:36:14.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dbvvs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4465' Mar 8 17:36:14.460: INFO: stderr: "" Mar 8 17:36:14.460: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 17:36:14.460: INFO: validating pod update-demo-nautilus-dbvvs Mar 8 17:36:14.464: INFO: got data: { "image": "nautilus.jpg" } Mar 8 17:36:14.464: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 17:36:14.464: INFO: update-demo-nautilus-dbvvs is verified up and running STEP: using delete to clean up resources Mar 8 17:36:14.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4465' Mar 8 17:36:14.540: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 17:36:14.540: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 17:36:14.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4465' Mar 8 17:36:14.613: INFO: stderr: "No resources found in kubectl-4465 namespace.\n" Mar 8 17:36:14.613: INFO: stdout: "" Mar 8 17:36:14.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4465 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 17:36:14.693: INFO: stderr: "" Mar 8 17:36:14.693: INFO: stdout: "update-demo-nautilus-b42kd\nupdate-demo-nautilus-dbvvs\n" Mar 8 17:36:15.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4465' Mar 8 17:36:15.304: INFO: stderr: "No resources found in kubectl-4465 namespace.\n" Mar 8 17:36:15.304: INFO: stdout: "" Mar 8 17:36:15.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4465 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 17:36:15.385: INFO: stderr: "" Mar 8 17:36:15.385: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:36:15.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4465" for this suite. 
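The scale steps above are plain kubectl operations. A minimal sketch with a hypothetical namespace and an RC shaped like the one the test pipes into kubectl create -f -; the go-template pod listing from the log works unchanged:
kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF
# Scale down to 1, then back up to 2; the controller converges the pod count each time.
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n demo
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n demo
kubectl get pods -n demo -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'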
• [SLOW TEST:30.527 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":136,"skipped":2413,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:36:15.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:36:16.524: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:36:18.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285776, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285776, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285776, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285776, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:36:21.580: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove 
STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:36:21.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8417" for this suite. STEP: Destroying namespace "webhook-8417-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.452 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":137,"skipped":2416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:36:21.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 8 17:36:21.936: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:36:26.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7410" for this suite. 
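The one-line summary "PodSpec: initContainers in spec.initContainers" stands in for the ordering contract being asserted: on a RestartNever pod, init containers run one at a time, in order, and each must exit 0 before the next starts or the app containers launch. A minimal sketch with hypothetical names:
kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: docker.io/library/busybox:1.29
    command: ["true"]
  - name: init-2
    image: docker.io/library/busybox:1.29
    command: ["true"]
  containers:
  - name: run-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 5"]
EOF
# Progress through init-1, then init-2, then run-1 is visible in the pod status:
kubectl get pod init-demo -n demo -o jsonpath='{.status.initContainerStatuses[*].state}'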
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":138,"skipped":2477,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:36:26.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Mar 8 17:36:26.490: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix926790434/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:36:26.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6459" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":139,"skipped":2486,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:36:26.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:36:39.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6646" for this suite. 
• [SLOW TEST:13.157 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":140,"skipped":2488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:36:39.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-757 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 17:36:39.783: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 17:36:39.801: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 17:36:41.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:36:43.823: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:36:45.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:36:47.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:36:49.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:36:51.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:36:53.806: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 17:36:53.810: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 8 17:36:55.814: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 8 17:36:57.814: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 8 17:36:59.815: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 17:37:01.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:8080/dial?request=hostname&protocol=http&host=10.244.1.252&port=8080&tries=1'] Namespace:pod-network-test-757 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:37:01.838: INFO: >>> kubeConfig: /root/.kube/config I0308 17:37:01.877677 7 log.go:172] (0xc002be2c60) (0xc0016d8960) Create stream I0308 17:37:01.877720 7 log.go:172] (0xc002be2c60) (0xc0016d8960) Stream added, broadcasting: 1 I0308 17:37:01.880249 7 log.go:172] (0xc002be2c60) Reply frame received for 1 I0308 17:37:01.880288 7 log.go:172] (0xc002be2c60) (0xc001c64000) Create stream I0308 17:37:01.880305 7 log.go:172] (0xc002be2c60) (0xc001c64000) Stream added, broadcasting: 3 I0308 17:37:01.881185 7 
log.go:172] (0xc002be2c60) Reply frame received for 3 I0308 17:37:01.881233 7 log.go:172] (0xc002be2c60) (0xc001be7860) Create stream I0308 17:37:01.881250 7 log.go:172] (0xc002be2c60) (0xc001be7860) Stream added, broadcasting: 5 I0308 17:37:01.882049 7 log.go:172] (0xc002be2c60) Reply frame received for 5 I0308 17:37:01.951118 7 log.go:172] (0xc002be2c60) Data frame received for 3 I0308 17:37:01.951154 7 log.go:172] (0xc001c64000) (3) Data frame handling I0308 17:37:01.951178 7 log.go:172] (0xc001c64000) (3) Data frame sent I0308 17:37:01.951311 7 log.go:172] (0xc002be2c60) Data frame received for 5 I0308 17:37:01.951345 7 log.go:172] (0xc001be7860) (5) Data frame handling I0308 17:37:01.951598 7 log.go:172] (0xc002be2c60) Data frame received for 3 I0308 17:37:01.951616 7 log.go:172] (0xc001c64000) (3) Data frame handling I0308 17:37:01.953306 7 log.go:172] (0xc002be2c60) Data frame received for 1 I0308 17:37:01.953325 7 log.go:172] (0xc0016d8960) (1) Data frame handling I0308 17:37:01.953335 7 log.go:172] (0xc0016d8960) (1) Data frame sent I0308 17:37:01.953348 7 log.go:172] (0xc002be2c60) (0xc0016d8960) Stream removed, broadcasting: 1 I0308 17:37:01.953408 7 log.go:172] (0xc002be2c60) Go away received I0308 17:37:01.953442 7 log.go:172] (0xc002be2c60) (0xc0016d8960) Stream removed, broadcasting: 1 I0308 17:37:01.953465 7 log.go:172] (0xc002be2c60) (0xc001c64000) Stream removed, broadcasting: 3 I0308 17:37:01.953482 7 log.go:172] (0xc002be2c60) (0xc001be7860) Stream removed, broadcasting: 5 Mar 8 17:37:01.953: INFO: Waiting for responses: map[] Mar 8 17:37:01.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:8080/dial?request=hostname&protocol=http&host=10.244.2.204&port=8080&tries=1'] Namespace:pod-network-test-757 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:37:01.974: INFO: >>> kubeConfig: /root/.kube/config I0308 17:37:02.008143 7 log.go:172] (0xc002cb66e0) (0xc000bda320) Create stream I0308 17:37:02.008169 7 log.go:172] (0xc002cb66e0) (0xc000bda320) Stream added, broadcasting: 1 I0308 17:37:02.010720 7 log.go:172] (0xc002cb66e0) Reply frame received for 1 I0308 17:37:02.010769 7 log.go:172] (0xc002cb66e0) (0xc002d295e0) Create stream I0308 17:37:02.010782 7 log.go:172] (0xc002cb66e0) (0xc002d295e0) Stream added, broadcasting: 3 I0308 17:37:02.011759 7 log.go:172] (0xc002cb66e0) Reply frame received for 3 I0308 17:37:02.011795 7 log.go:172] (0xc002cb66e0) (0xc000bda3c0) Create stream I0308 17:37:02.011808 7 log.go:172] (0xc002cb66e0) (0xc000bda3c0) Stream added, broadcasting: 5 I0308 17:37:02.012740 7 log.go:172] (0xc002cb66e0) Reply frame received for 5 I0308 17:37:02.064827 7 log.go:172] (0xc002cb66e0) Data frame received for 3 I0308 17:37:02.064846 7 log.go:172] (0xc002d295e0) (3) Data frame handling I0308 17:37:02.064861 7 log.go:172] (0xc002d295e0) (3) Data frame sent I0308 17:37:02.065170 7 log.go:172] (0xc002cb66e0) Data frame received for 3 I0308 17:37:02.065192 7 log.go:172] (0xc002d295e0) (3) Data frame handling I0308 17:37:02.065469 7 log.go:172] (0xc002cb66e0) Data frame received for 5 I0308 17:37:02.065482 7 log.go:172] (0xc000bda3c0) (5) Data frame handling I0308 17:37:02.066453 7 log.go:172] (0xc002cb66e0) Data frame received for 1 I0308 17:37:02.066471 7 log.go:172] (0xc000bda320) (1) Data frame handling I0308 17:37:02.066487 7 log.go:172] (0xc000bda320) (1) Data frame sent I0308 17:37:02.066504 7 log.go:172] (0xc002cb66e0) (0xc000bda320) Stream 
removed, broadcasting: 1 I0308 17:37:02.066526 7 log.go:172] (0xc002cb66e0) Go away received I0308 17:37:02.066663 7 log.go:172] (0xc002cb66e0) (0xc000bda320) Stream removed, broadcasting: 1 I0308 17:37:02.066692 7 log.go:172] (0xc002cb66e0) (0xc002d295e0) Stream removed, broadcasting: 3 I0308 17:37:02.066701 7 log.go:172] (0xc002cb66e0) (0xc000bda3c0) Stream removed, broadcasting: 5 Mar 8 17:37:02.066: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:37:02.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-757" for this suite. • [SLOW TEST:22.357 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:37:02.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:37:02.121: INFO: Creating ReplicaSet my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2 Mar 8 17:37:02.142: INFO: Pod name my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2: Found 0 pods out of 1 Mar 8 17:37:07.155: INFO: Pod name my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2: Found 1 pods out of 1 Mar 8 17:37:07.155: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2" is running Mar 8 17:37:07.161: INFO: Pod "my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2-pnv5h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:37:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:37:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:37:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 17:37:02 +0000 UTC Reason: Message:}]) Mar 8 17:37:07.161: INFO: Trying to dial the pod Mar 8 17:37:12.190: INFO: Controller my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2: Got expected result from replica 1 
[my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2-pnv5h]: "my-hostname-basic-044f6e10-77d5-4a15-885a-f1fe7dc7d2c2-pnv5h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:37:12.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3261" for this suite. • [SLOW TEST:10.121 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":142,"skipped":2594,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:37:12.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:37:12.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40" in namespace "projected-6152" to be "Succeeded or Failed" Mar 8 17:37:12.256: INFO: Pod "downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40": Phase="Pending", Reason="", readiness=false. Elapsed: 3.11156ms Mar 8 17:37:14.260: INFO: Pod "downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006759587s STEP: Saw pod success Mar 8 17:37:14.260: INFO: Pod "downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40" satisfied condition "Succeeded or Failed" Mar 8 17:37:14.263: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40 container client-container: STEP: delete the pod Mar 8 17:37:14.347: INFO: Waiting for pod downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40 to disappear Mar 8 17:37:14.354: INFO: Pod downwardapi-volume-3b4edbc4-dda5-4b4b-a9dc-5a5947255b40 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:37:14.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6152" for this suite. 
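
The downward API projection exercised in the test above boils down to a single volume entry. A minimal sketch of that volume using the k8s.io API types (the file name, container name, and divisor are illustrative assumptions, not the suite's exact values): because the container declares no cpu limit, the kubelet writes the node's allocatable cpu into the projected file instead.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Projected downward API volume exposing the container's cpu limit as a
	// file. With no resources.limits.cpu set on the container, the value
	// falls back to the node's allocatable cpu.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit", // file name is an assumption
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // assumption
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
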
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2598,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:37:14.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-66146526-e6c4-4624-a69e-26cca01d4ba8 STEP: Creating a pod to test consume secrets Mar 8 17:37:14.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4" in namespace "projected-292" to be "Succeeded or Failed" Mar 8 17:37:14.444: INFO: Pod "pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446416ms Mar 8 17:37:16.448: INFO: Pod "pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008765394s Mar 8 17:37:18.451: INFO: Pod "pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01198029s STEP: Saw pod success Mar 8 17:37:18.451: INFO: Pod "pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4" satisfied condition "Succeeded or Failed" Mar 8 17:37:18.454: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4 container projected-secret-volume-test: STEP: delete the pod Mar 8 17:37:18.489: INFO: Waiting for pod pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4 to disappear Mar 8 17:37:18.492: INFO: Pod pod-projected-secrets-bb0868ff-17b0-4624-b810-17858f1682c4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:37:18.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-292" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2599,"failed":0} ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:37:18.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-07d6456e-03c9-40e3-ae06-10c0977ebb77 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-07d6456e-03c9-40e3-ae06-10c0977ebb77 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:37:22.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2192" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2599,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:37:22.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 8 17:37:22.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:22.772: INFO: Number of nodes with available pods: 0 Mar 8 17:37:22.772: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:37:23.776: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:23.779: INFO: Number of nodes with available pods: 0 Mar 8 17:37:23.779: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:37:24.783: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:24.787: INFO: Number of nodes with available pods: 2 Mar 8 17:37:24.787: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 8 17:37:24.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:24.827: INFO: Number of nodes with available pods: 1 Mar 8 17:37:24.827: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 17:37:25.831: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:25.834: INFO: Number of nodes with available pods: 1 Mar 8 17:37:25.834: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 17:37:26.832: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:26.835: INFO: Number of nodes with available pods: 1 Mar 8 17:37:26.835: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 17:37:27.830: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:27.836: INFO: Number of nodes with available pods: 2 Mar 8 17:37:27.836: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8974, will wait for the garbage collector to delete the pods Mar 8 17:37:27.900: INFO: Deleting DaemonSet.extensions daemon-set took: 5.4344ms Mar 8 17:37:28.200: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239971ms Mar 8 17:37:42.504: INFO: Number of nodes with available pods: 0 Mar 8 17:37:42.504: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 17:37:42.506: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8974/daemonsets","resourceVersion":"54692"},"items":null} Mar 8 17:37:42.508: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8974/pods","resourceVersion":"54692"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:37:42.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8974" for this suite. • [SLOW TEST:19.909 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":146,"skipped":2600,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:37:42.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:37:42.614: INFO: Create a RollingUpdate DaemonSet Mar 8 17:37:42.617: INFO: Check that daemon pods launch on every node of the cluster Mar 8 17:37:42.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:42.624: INFO: Number of nodes with available pods: 0 Mar 8 17:37:42.624: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:37:43.628: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:43.656: INFO: Number of nodes with available pods: 0 Mar 8 17:37:43.656: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:37:44.629: INFO: DaemonSet pods 
can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:44.632: INFO: Number of nodes with available pods: 2 Mar 8 17:37:44.632: INFO: Number of running nodes: 2, number of available pods: 2 Mar 8 17:37:44.632: INFO: Update the DaemonSet to trigger a rollout Mar 8 17:37:44.640: INFO: Updating DaemonSet daemon-set Mar 8 17:37:48.676: INFO: Roll back the DaemonSet before rollout is complete Mar 8 17:37:48.681: INFO: Updating DaemonSet daemon-set Mar 8 17:37:48.681: INFO: Make sure DaemonSet rollback is complete Mar 8 17:37:48.684: INFO: Wrong image for pod: daemon-set-6rntl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 8 17:37:48.684: INFO: Pod daemon-set-6rntl is not available Mar 8 17:37:48.690: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:49.705: INFO: Wrong image for pod: daemon-set-6rntl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 8 17:37:49.705: INFO: Pod daemon-set-6rntl is not available Mar 8 17:37:49.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:50.695: INFO: Wrong image for pod: daemon-set-6rntl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 8 17:37:50.695: INFO: Pod daemon-set-6rntl is not available Mar 8 17:37:50.698: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:37:51.705: INFO: Pod daemon-set-2nn89 is not available Mar 8 17:37:51.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2401, will wait for the garbage collector to delete the pods Mar 8 17:37:51.772: INFO: Deleting DaemonSet.extensions daemon-set took: 4.518689ms Mar 8 17:37:51.873: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.210531ms Mar 8 17:38:02.577: INFO: Number of nodes with available pods: 0 Mar 8 17:38:02.577: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 17:38:02.579: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2401/daemonsets","resourceVersion":"54854"},"items":null} Mar 8 17:38:02.582: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2401/pods","resourceVersion":"54854"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:02.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2401" for this suite. 
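
The rollback above amounts to two image updates on the same DaemonSet. A hedged client-go sketch of roughly what the framework does (v0.18+ call signatures, no conflict retry; namespace, DaemonSet name, and the bad image are taken from the run above). The property being checked is that pods still running the good image are not restarted by the rollback; only the pod stuck pulling foo:non-existent gets replaced.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	dsClient := cs.AppsV1().DaemonSets("daemonsets-2401")

	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image

	// Trigger a rollout with an image that can never pull.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the rollout completes by restoring the old image.
	// Healthy pods on the old image are left alone (no unnecessary restarts).
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
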
• [SLOW TEST:20.073 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":147,"skipped":2618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:02.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 8 17:38:02.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config api-versions' Mar 8 17:38:02.868: INFO: stderr: "" Mar 8 17:38:02.868: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:02.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1709" for this suite. 
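
The kubectl api-versions check maps directly onto the discovery API, where the legacy core group appears with the bare GroupVersion "v1". A minimal client-go sketch (v0.18+; kubeconfig path as used by this run):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the legacy core group
				found = true
			}
		}
	}
	fmt.Println("v1 available:", found)
}
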
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":148,"skipped":2651,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:02.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:05.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9868" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2672,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:05.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:21.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2476" for this suite. 
• [SLOW TEST:16.245 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":150,"skipped":2680,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:21.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 8 17:38:21.368: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 8 17:38:26.375: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:26.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5835" for this suite. 
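
What "released" means in the test above: once a pod's labels stop matching the controller's selector, the ReplicationController removes its ownerReference from the pod and creates a replacement to restore the replica count. A minimal sketch using a strategic-merge patch (v0.18+ signatures; the full pod name is elided here because the log only prints the pod-release prefix):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Relabel the pod so it no longer matches the RC selector (name=pod-release
	// in this run); the controller then releases it.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	pod, err := cs.CoreV1().Pods("replication-controller-5835").Patch(
		context.TODO(), "pod-release-<suffix>", // substitute the real pod name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("owners after patch:", pod.OwnerReferences) // expected: empty
}
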
• [SLOW TEST:5.223 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":151,"skipped":2694,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:26.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:38:27.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:38:30.125: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 8 17:38:30.152: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:30.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3616" for this suite. STEP: Destroying namespace "webhook-3616-markers" for this suite. 
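
The denial above works because a ValidatingWebhookConfiguration intercepts CREATE of customresourcedefinitions and the webhook rejects the AdmissionReview. A minimal sketch of such a configuration (object construction only; the webhook name, path, and CA bundle placeholder are assumptions, while the service name and namespace mirror this run):

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/crd"                 // assumption: the test server's CRD endpoint
	caBundle := []byte("<PEM CA>") // placeholder: CA from the "Setting up server cert" step
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com", // illustrative
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3616", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle,
			},
			SideEffects:             &none,
			FailurePolicy:           &fail,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Println("would register:", cfg.Name)
}
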
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":152,"skipped":2711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:30.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:38:30.710: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:38:32.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285910, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285910, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285910, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719285910, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:38:35.795: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:36.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4765" for this suite. STEP: Destroying namespace "webhook-4765-markers" for this suite. 
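
Both the listing and the collection delete above are single API calls against the admissionregistration group. A minimal sketch (v0.18+ signatures; the label selector is an assumption, since the suite selects its webhooks by a run-specific label). After the DeleteCollection, the previously rejected configMap create succeeds, which is exactly what the second create in the test confirms.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"} // label is an assumption

	hooks := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	list, err := hooks.List(ctx, sel)
	if err != nil {
		panic(err)
	}
	fmt.Println("matching webhook configurations:", len(list.Items))

	// Unregister every matching webhook in one call.
	if err := hooks.DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
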
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":153,"skipped":2748,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:36.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:52.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8822" for this suite. • [SLOW TEST:16.204 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":154,"skipped":2757,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:52.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 8 17:38:57.172: INFO: Successfully updated pod "labelsupdated9589275-85da-4a62-8f8a-4e95d1ae7772" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:38:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8715" for this suite. • [SLOW TEST:6.652 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2764,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:38:59.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-186d4778-ad63-4ddc-8f2c-73ceed3c909d STEP: Creating secret with name secret-projected-all-test-volume-bb88e568-1ac6-4e5f-afe9-d4c330457ada STEP: Creating a pod to test Check all projections for projected volume plugin Mar 8 17:38:59.316: INFO: Waiting up to 5m0s for pod "projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a" in namespace "projected-2836" to be "Succeeded or Failed" Mar 8 17:38:59.320: INFO: Pod "projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.362104ms Mar 8 17:39:01.323: INFO: Pod "projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007440227s STEP: Saw pod success Mar 8 17:39:01.323: INFO: Pod "projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a" satisfied condition "Succeeded or Failed" Mar 8 17:39:01.325: INFO: Trying to get logs from node latest-worker pod projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a container projected-all-volume-test: STEP: delete the pod Mar 8 17:39:01.350: INFO: Waiting for pod projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a to disappear Mar 8 17:39:01.356: INFO: Pod projected-volume-505f29c0-b904-44f8-8c08-672166e41c5a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:39:01.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2836" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2770,"failed":0} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:39:01.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 8 17:39:05.964: INFO: Successfully updated pod "pod-update-activedeadlineseconds-37bef339-6c46-4274-961d-60cdbbf24b6b" Mar 8 17:39:05.964: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-37bef339-6c46-4274-961d-60cdbbf24b6b" in namespace "pods-5901" to be "terminated due to deadline exceeded" Mar 8 17:39:05.982: INFO: Pod "pod-update-activedeadlineseconds-37bef339-6c46-4274-961d-60cdbbf24b6b": Phase="Running", Reason="", readiness=true. Elapsed: 17.785823ms Mar 8 17:39:08.023: INFO: Pod "pod-update-activedeadlineseconds-37bef339-6c46-4274-961d-60cdbbf24b6b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058912961s Mar 8 17:39:08.023: INFO: Pod "pod-update-activedeadlineseconds-37bef339-6c46-4274-961d-60cdbbf24b6b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:39:08.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5901" for this suite. 
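
The update above works because activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod (and once set it may only be reduced); the kubelet then fails the pod with reason DeadlineExceeded when the deadline elapses, which is the Phase="Failed" transition seen in the log. A minimal sketch (v0.18+ signatures; namespace and pod name taken from this run):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("pods-5901")

	pod, err := pods.Get(ctx, "pod-update-activedeadlineseconds-37bef339-6c46-4274-961d-60cdbbf24b6b", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Tighten the deadline on the live pod; no conflict retry in this sketch.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
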
• [SLOW TEST:6.664 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2772,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:39:08.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 8 17:39:09.238: INFO: Pod name wrapped-volume-race-4eb3f303-06e9-47da-af17-e3501f6ce002: Found 0 pods out of 5 Mar 8 17:39:14.325: INFO: Pod name wrapped-volume-race-4eb3f303-06e9-47da-af17-e3501f6ce002: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4eb3f303-06e9-47da-af17-e3501f6ce002 in namespace emptydir-wrapper-4179, will wait for the garbage collector to delete the pods Mar 8 17:39:24.403: INFO: Deleting ReplicationController wrapped-volume-race-4eb3f303-06e9-47da-af17-e3501f6ce002 took: 6.594939ms Mar 8 17:39:24.704: INFO: Terminating ReplicationController wrapped-volume-race-4eb3f303-06e9-47da-af17-e3501f6ce002 pods took: 300.281333ms STEP: Creating RC which spawns configmap-volume pods Mar 8 17:39:32.631: INFO: Pod name wrapped-volume-race-c301c71a-5479-4c5a-859d-595cc796d757: Found 0 pods out of 5 Mar 8 17:39:37.636: INFO: Pod name wrapped-volume-race-c301c71a-5479-4c5a-859d-595cc796d757: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c301c71a-5479-4c5a-859d-595cc796d757 in namespace emptydir-wrapper-4179, will wait for the garbage collector to delete the pods Mar 8 17:39:49.752: INFO: Deleting ReplicationController wrapped-volume-race-c301c71a-5479-4c5a-859d-595cc796d757 took: 25.45387ms Mar 8 17:39:50.052: INFO: Terminating ReplicationController wrapped-volume-race-c301c71a-5479-4c5a-859d-595cc796d757 pods took: 300.259557ms STEP: Creating RC which spawns configmap-volume pods Mar 8 17:39:55.479: INFO: Pod name wrapped-volume-race-423162b5-1dbe-4c04-8b3f-260971cf6046: Found 0 pods out of 5 Mar 8 17:40:00.494: INFO: Pod name wrapped-volume-race-423162b5-1dbe-4c04-8b3f-260971cf6046: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-423162b5-1dbe-4c04-8b3f-260971cf6046 in namespace emptydir-wrapper-4179, will wait for the garbage collector to delete the pods Mar 8 17:40:12.631: INFO: Deleting ReplicationController wrapped-volume-race-423162b5-1dbe-4c04-8b3f-260971cf6046 took: 7.146868ms 
Mar 8 17:40:13.432: INFO: Terminating ReplicationController wrapped-volume-race-423162b5-1dbe-4c04-8b3f-260971cf6046 pods took: 800.285251ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:40:24.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4179" for this suite. • [SLOW TEST:76.136 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":158,"skipped":2779,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:40:24.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:40:24.901: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:40:26.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286024, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286024, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286024, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286024, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:40:29.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 
'kubectl attach' the pod, should be denied by the webhook Mar 8 17:40:34.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config attach --namespace=webhook-77 to-be-attached-pod -i -c=container1' Mar 8 17:40:34.164: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:40:34.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-77" for this suite. STEP: Destroying namespace "webhook-77-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.062 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":159,"skipped":2787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:40:34.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:40:34.310: INFO: Creating deployment "webserver-deployment" Mar 8 17:40:34.315: INFO: Waiting for observed generation 1 Mar 8 17:40:36.324: INFO: Waiting for all required pods to come up Mar 8 17:40:36.327: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 8 17:40:38.337: INFO: Waiting for deployment "webserver-deployment" to complete Mar 8 17:40:38.343: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 8 17:40:38.351: INFO: Updating deployment webserver-deployment Mar 8 17:40:38.351: INFO: Waiting for observed generation 2 Mar 8 17:40:40.362: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 8 17:40:40.364: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 8 17:40:40.368: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 17:40:40.374: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 8 17:40:40.374: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 8 17:40:40.376: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired 
number of replicas Mar 8 17:40:40.380: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 8 17:40:40.380: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 8 17:40:40.400: INFO: Updating deployment webserver-deployment Mar 8 17:40:40.400: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 8 17:40:40.433: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 8 17:40:40.523: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 17:40:42.575: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6302 /apis/apps/v1/namespaces/deployment-6302/deployments/webserver-deployment c7bc9960-a711-4e2a-846f-5c11580367cd 56936 3 2020-03-08 17:40:34 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a24cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 17:40:40 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-08 17:40:40 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 8 17:40:42.577: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6302 /apis/apps/v1/namespaces/deployment-6302/replicasets/webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 56933 3 2020-03-08 17:40:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c7bc9960-a711-4e2a-846f-5c11580367cd 0xc003a25207 0xc003a25208}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a25278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:40:42.577: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 8 17:40:42.577: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6302 /apis/apps/v1/namespaces/deployment-6302/replicasets/webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 56922 3 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c7bc9960-a711-4e2a-846f-5c11580367cd 0xc003a25147 0xc003a25148}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a251a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:40:42.581: INFO: Pod "webserver-deployment-595b5b9587-2njdd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2njdd webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-2njdd 21a1f91f-ed95-4410-86ec-db55b22eac87 56980 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002874a27 0xc002874a28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.581: INFO: Pod "webserver-deployment-595b5b9587-5rfh9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5rfh9 webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-5rfh9 656aa7d6-90cc-46a5-ad69-f61292df5eb6 56753 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002874b87 0xc002874b88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Prio
rityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.215,StartTime:2020-03-08 17:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aa6971c861943fe6fc175999a3fcb76bf997b9259f0dc322fc11f21b3b68a496,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.581: INFO: Pod "webserver-deployment-595b5b9587-8md2h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8md2h webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-8md2h a73de521-5c55-4a36-aee2-5d95fdad461a 56737 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002874d07 0xc002874d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.34,StartTime:2020-03-08 17:40:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c7820c0b8584c250e9e55dfdc50f83b53318ae4c220c60e395da18d7e9f5ab58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.581: INFO: Pod "webserver-deployment-595b5b9587-99qp4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-99qp4 webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-99qp4 0e731237-4fd7-45a8-bcee-05ea82bb747b 56939 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002874e80 0xc002874e81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable
,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-b97sc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b97sc webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-b97sc 14562cf4-5aac-473b-bfd5-da9a06c599f8 56990 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002874fd7 0xc002874fd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-c6289" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c6289 webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-c6289 cb30764b-1b18-4729-8e27-707ddcb362a0 56759 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875137 0xc002875138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Prio
rityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.217,StartTime:2020-03-08 17:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://899085738cbdae409ed3e2df9146333121e0f507bfbdb8f88d87afcbd8e85869,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-d5bpn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d5bpn webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-d5bpn ad42449b-8062-4c12-a753-8de0ddae01c6 56762 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc0028752b7 0xc0028752b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.214,StartTime:2020-03-08 17:40:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://04c5bdc33ee56783ec2e3df72b601e408c27d8cd7f4a577cfa0eca991ea92881,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-fk6rs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fk6rs webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-fk6rs eb0748e2-65af-452d-babd-56ba03e365db 56992 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875447 0xc002875448}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachabl
e,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-flblm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-flblm webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-flblm 1c60a813-4858-40c8-a9e8-80789aa09da0 56996 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc0028755a7 0xc0028755a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-h4cxr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h4cxr webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-h4cxr 7e8cddb4-2bc1-41e4-84ca-deb3f30b80fe 56742 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875707 0xc002875708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Prior
ityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.35,StartTime:2020-03-08 17:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1a219b622bca7846c0b2171bac75f6ca8a00b9e09f986a1999f01bc622ff71cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.582: INFO: Pod "webserver-deployment-595b5b9587-hpsqv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hpsqv webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-hpsqv 4db612f3-8398-4855-b720-555601de60ec 56955 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875880 0xc002875881}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-lxzjv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lxzjv webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-lxzjv c607064b-5eae-4553-85c9-212e79de5226 56948 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc0028759d7 0xc0028759d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},
PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-mjnzc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mjnzc webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-mjnzc 5d0ff7eb-d0f8-4881-98a0-c5c79e29f541 56765 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875b37 0xc002875b38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.37,StartTime:2020-03-08 17:40:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0884d4267765d6ca52e8fc0d949fa6efd7f91727e991e35562daba0ec2f5b883,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-pqw6q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pqw6q webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-pqw6q a936744b-4aad-4c9f-a4f1-98d25b3ba7bd 56920 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875cb0 0xc002875cb1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable
,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-qd945" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qd945 webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-qd945 f5c04107-87a7-42d0-ac19-5addd1c8cb6b 56988 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875dc0 0xc002875dc1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-
ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-rd2mv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rd2mv webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-rd2mv 94ebcaaa-4942-4719-af19-06f3deb2862f 56926 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc002875f17 0xc002875f18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-tp4mp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tp4mp webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-tp4mp ea48e097-46f8-4846-bcf8-bcd062c21ca8 56756 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc004770077 0xc004770078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Prio
rityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.216,StartTime:2020-03-08 17:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a9dc80e2cf2fa173c11bd216768fbd5f96dfdfe06264783824113eb6dcd5199,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.583: INFO: Pod "webserver-deployment-595b5b9587-vcbl6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vcbl6 webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-vcbl6 dec8caf7-1302-4f08-a774-85ecc67df7cb 56749 0 2020-03-08 17:40:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc0047701f7 0xc0047701f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.213,StartTime:2020-03-08 17:40:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:40:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a125124200f067fa53d08b556eaa6ba30ed296413486c1303190d5a8647c89cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-595b5b9587-x8z22" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x8z22 webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-x8z22 783e5ad0-fcaa-498b-b1d2-20ff19a98d21 56952 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc004770377 0xc004770378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachab
le,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-595b5b9587-xzgnd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xzgnd webserver-deployment-595b5b9587- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-595b5b9587-xzgnd e1beaf67-3f85-4997-8db7-ca8408024563 56998 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 05eb15d7-68de-4c07-a5db-f3d93a6c41c9 0xc0047704d7 0xc0047704d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-c7997dcc8-7jhxg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7jhxg webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-7jhxg a29138d5-506e-4089-b5cf-31f9aba1105d 56878 0 2020-03-08 17:40:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004770637 0xc004770638}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0
,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.218,StartTime:2020-03-08 17:40:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-c7997dcc8-9htz2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9htz2 webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-9htz2 e8661fbd-2284-4e6b-8b1d-bf8585032b9a 56927 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc0047707e0 0xc0047707e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-c7997dcc8-9vczt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9vczt webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-9vczt 412717ae-5222-41f7-a85a-9c84a8209ee0 56989 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004770900 0xc004770901}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-c7997dcc8-dt6sb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dt6sb webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-dt6sb 2922d93c-8620-4fbe-937b-e45598f56c8e 56823 0 2020-03-08 17:40:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004770a70 0xc004770a71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProc
essNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.584: INFO: Pod "webserver-deployment-c7997dcc8-hkcg8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hkcg8 webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-hkcg8 ff2fcfbd-1b33-4900-9a32-7370b658e912 56941 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004770bf0 0xc004770bf1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-hvmlt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hvmlt webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-hvmlt f2438d8a-3b3a-4ddb-b422-2cf59471406e 56824 0 2020-03-08 17:40:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004770d80 0xc004770d81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemp
tionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-jq7zq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jq7zq webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-jq7zq 74aa6a05-10f6-444a-89c8-47de99d8d885 56979 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004770f00 0xc004770f01}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-k798j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k798j webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-k798j 8cf4c5d5-efbe-43e2-bc3d-96b89aa3c569 56916 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004771080 0xc004771081}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemp
tionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-qvn72" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qvn72 webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-qvn72 15ad1933-1e8d-4f44-aa01-46edd7c76414 56804 0 2020-03-08 17:40:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc0047711d0 0xc0047711d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Rea
dinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-shn2p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-shn2p webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-shn2p 8cca47d4-9a5d-43d0-a7db-7fe4bd19baff 56932 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004771340 0xc004771341}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-t6kvx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t6kvx webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-t6kvx 8f8fc14c-f93c-46c5-93b0-cfdcb10887ff 56915 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc0047714b0 0xc0047714b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemp
tionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-ww7th" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ww7th webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-ww7th 6ea51530-4bae-4df1-bfca-a665bea61ba2 56810 0 2020-03-08 17:40:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc0047715d0 0xc0047715d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Rea
dinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-08 17:40:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:40:42.585: INFO: Pod "webserver-deployment-c7997dcc8-zsn24" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zsn24 webserver-deployment-c7997dcc8- deployment-6302 /api/v1/namespaces/deployment-6302/pods/webserver-deployment-c7997dcc8-zsn24 6dd1757c-723e-4fa9-b104-441b9d57168e 56957 0 2020-03-08 17:40:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ead22805-e621-47ca-8c53-c2b595fae699 0xc004771740 0xc004771741}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz2zk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz2zk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz2zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:40:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-08 17:40:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:40:42.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6302" for this suite. • [SLOW TEST:8.349 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":160,"skipped":2812,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:40:42.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-0a0c24e4-cba6-4e76-9200-1a67c4b1c226 in namespace container-probe-4588 Mar 8 17:40:46.823: INFO: Started pod busybox-0a0c24e4-cba6-4e76-9200-1a67c4b1c226 in namespace container-probe-4588 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 17:40:46.825: INFO: Initial restart count of pod busybox-0a0c24e4-cba6-4e76-9200-1a67c4b1c226 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:44:47.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4588" for this suite. 
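The container-probe steps above create a busybox pod whose liveness probe execs "cat /tmp/health", then watch its restartCount for roughly four minutes; because the container keeps /tmp/health in place, every probe succeeds and the count stays at 0. For reference, a pod spec along these lines reproduces the setup: a minimal sketch against the v1.17-era k8s.io/api types (where Probe still embeds Handler; the image tag, timings, and helper name are illustrative, not the exact manifest the suite builds):

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildLivenessPod sketches a pod like the one the test creates: the
// container writes /tmp/health at startup and never removes it, so the
// exec probe "cat /tmp/health" keeps succeeding and no restart occurs.
func buildLivenessPod(name, ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative tag
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in later API versions
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // illustrative timings
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

If the probed file were removed mid-run (as in the companion "should be restarted" test), the kubelet would kill and restart the container and restartCount would increment instead.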
• [SLOW TEST:244.814 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:44:47.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 8 17:44:55.500: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:55.500: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:55.536937 7 log.go:172] (0xc002cd6370) (0xc001b46b40) Create stream I0308 17:44:55.536974 7 log.go:172] (0xc002cd6370) (0xc001b46b40) Stream added, broadcasting: 1 I0308 17:44:55.539589 7 log.go:172] (0xc002cd6370) Reply frame received for 1 I0308 17:44:55.540453 7 log.go:172] (0xc002cd6370) (0xc002a3c3c0) Create stream I0308 17:44:55.540482 7 log.go:172] (0xc002cd6370) (0xc002a3c3c0) Stream added, broadcasting: 3 I0308 17:44:55.541731 7 log.go:172] (0xc002cd6370) Reply frame received for 3 I0308 17:44:55.541773 7 log.go:172] (0xc002cd6370) (0xc001b46be0) Create stream I0308 17:44:55.541796 7 log.go:172] (0xc002cd6370) (0xc001b46be0) Stream added, broadcasting: 5 I0308 17:44:55.545154 7 log.go:172] (0xc002cd6370) Reply frame received for 5 I0308 17:44:55.598380 7 log.go:172] (0xc002cd6370) Data frame received for 5 I0308 17:44:55.598416 7 log.go:172] (0xc001b46be0) (5) Data frame handling I0308 17:44:55.598439 7 log.go:172] (0xc002cd6370) Data frame received for 3 I0308 17:44:55.598448 7 log.go:172] (0xc002a3c3c0) (3) Data frame handling I0308 17:44:55.598456 7 log.go:172] (0xc002a3c3c0) (3) Data frame sent I0308 17:44:55.598464 7 log.go:172] (0xc002cd6370) Data frame received for 3 I0308 17:44:55.598473 7 log.go:172] (0xc002a3c3c0) (3) Data frame handling I0308 17:44:55.599746 7 log.go:172] (0xc002cd6370) Data frame received for 1 I0308 17:44:55.599771 7 log.go:172] (0xc001b46b40) (1) Data frame handling I0308 17:44:55.599791 7 log.go:172] (0xc001b46b40) (1) Data frame sent I0308 17:44:55.599817 7 log.go:172] (0xc002cd6370) (0xc001b46b40) Stream removed, broadcasting: 1 I0308 
17:44:55.599922 7 log.go:172] (0xc002cd6370) Go away received I0308 17:44:55.599959 7 log.go:172] (0xc002cd6370) (0xc001b46b40) Stream removed, broadcasting: 1 I0308 17:44:55.599971 7 log.go:172] (0xc002cd6370) (0xc002a3c3c0) Stream removed, broadcasting: 3 I0308 17:44:55.599980 7 log.go:172] (0xc002cd6370) (0xc001b46be0) Stream removed, broadcasting: 5 Mar 8 17:44:55.599: INFO: Exec stderr: "" Mar 8 17:44:55.600: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:55.600: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:55.633702 7 log.go:172] (0xc002be2b00) (0xc0012dd900) Create stream I0308 17:44:55.633732 7 log.go:172] (0xc002be2b00) (0xc0012dd900) Stream added, broadcasting: 1 I0308 17:44:55.635996 7 log.go:172] (0xc002be2b00) Reply frame received for 1 I0308 17:44:55.636042 7 log.go:172] (0xc002be2b00) (0xc001b46d20) Create stream I0308 17:44:55.636056 7 log.go:172] (0xc002be2b00) (0xc001b46d20) Stream added, broadcasting: 3 I0308 17:44:55.636851 7 log.go:172] (0xc002be2b00) Reply frame received for 3 I0308 17:44:55.636889 7 log.go:172] (0xc002be2b00) (0xc001b46e60) Create stream I0308 17:44:55.636901 7 log.go:172] (0xc002be2b00) (0xc001b46e60) Stream added, broadcasting: 5 I0308 17:44:55.637781 7 log.go:172] (0xc002be2b00) Reply frame received for 5 I0308 17:44:55.693903 7 log.go:172] (0xc002be2b00) Data frame received for 5 I0308 17:44:55.693938 7 log.go:172] (0xc001b46e60) (5) Data frame handling I0308 17:44:55.693959 7 log.go:172] (0xc002be2b00) Data frame received for 3 I0308 17:44:55.693972 7 log.go:172] (0xc001b46d20) (3) Data frame handling I0308 17:44:55.693986 7 log.go:172] (0xc001b46d20) (3) Data frame sent I0308 17:44:55.693997 7 log.go:172] (0xc002be2b00) Data frame received for 3 I0308 17:44:55.694008 7 log.go:172] (0xc001b46d20) (3) Data frame handling I0308 17:44:55.695233 7 log.go:172] (0xc002be2b00) Data frame received for 1 I0308 17:44:55.695264 7 log.go:172] (0xc0012dd900) (1) Data frame handling I0308 17:44:55.695281 7 log.go:172] (0xc0012dd900) (1) Data frame sent I0308 17:44:55.695302 7 log.go:172] (0xc002be2b00) (0xc0012dd900) Stream removed, broadcasting: 1 I0308 17:44:55.695330 7 log.go:172] (0xc002be2b00) Go away received I0308 17:44:55.695425 7 log.go:172] (0xc002be2b00) (0xc0012dd900) Stream removed, broadcasting: 1 I0308 17:44:55.695451 7 log.go:172] (0xc002be2b00) (0xc001b46d20) Stream removed, broadcasting: 3 I0308 17:44:55.695470 7 log.go:172] (0xc002be2b00) (0xc001b46e60) Stream removed, broadcasting: 5 Mar 8 17:44:55.695: INFO: Exec stderr: "" Mar 8 17:44:55.695: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:55.695: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:55.726986 7 log.go:172] (0xc002cd69a0) (0xc001b47220) Create stream I0308 17:44:55.727027 7 log.go:172] (0xc002cd69a0) (0xc001b47220) Stream added, broadcasting: 1 I0308 17:44:55.730340 7 log.go:172] (0xc002cd69a0) Reply frame received for 1 I0308 17:44:55.730382 7 log.go:172] (0xc002cd69a0) (0xc002a3c460) Create stream I0308 17:44:55.730403 7 log.go:172] (0xc002cd69a0) (0xc002a3c460) Stream added, broadcasting: 3 I0308 17:44:55.731488 7 log.go:172] (0xc002cd69a0) Reply frame received for 3 I0308 17:44:55.731519 7 log.go:172] (0xc002cd69a0) (0xc0012ddc20) 
Create stream I0308 17:44:55.731531 7 log.go:172] (0xc002cd69a0) (0xc0012ddc20) Stream added, broadcasting: 5 I0308 17:44:55.732510 7 log.go:172] (0xc002cd69a0) Reply frame received for 5 I0308 17:44:55.785030 7 log.go:172] (0xc002cd69a0) Data frame received for 5 I0308 17:44:55.785080 7 log.go:172] (0xc0012ddc20) (5) Data frame handling I0308 17:44:55.785116 7 log.go:172] (0xc002cd69a0) Data frame received for 3 I0308 17:44:55.785144 7 log.go:172] (0xc002a3c460) (3) Data frame handling I0308 17:44:55.785169 7 log.go:172] (0xc002a3c460) (3) Data frame sent I0308 17:44:55.785190 7 log.go:172] (0xc002cd69a0) Data frame received for 3 I0308 17:44:55.785203 7 log.go:172] (0xc002a3c460) (3) Data frame handling I0308 17:44:55.786356 7 log.go:172] (0xc002cd69a0) Data frame received for 1 I0308 17:44:55.786376 7 log.go:172] (0xc001b47220) (1) Data frame handling I0308 17:44:55.786398 7 log.go:172] (0xc001b47220) (1) Data frame sent I0308 17:44:55.787033 7 log.go:172] (0xc002cd69a0) (0xc001b47220) Stream removed, broadcasting: 1 I0308 17:44:55.787065 7 log.go:172] (0xc002cd69a0) Go away received I0308 17:44:55.787142 7 log.go:172] (0xc002cd69a0) (0xc001b47220) Stream removed, broadcasting: 1 I0308 17:44:55.787167 7 log.go:172] (0xc002cd69a0) (0xc002a3c460) Stream removed, broadcasting: 3 I0308 17:44:55.787183 7 log.go:172] (0xc002cd69a0) (0xc0012ddc20) Stream removed, broadcasting: 5 Mar 8 17:44:55.787: INFO: Exec stderr: "" Mar 8 17:44:55.787: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:55.787: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:55.821379 7 log.go:172] (0xc002d36d10) (0xc0010fff40) Create stream I0308 17:44:55.821406 7 log.go:172] (0xc002d36d10) (0xc0010fff40) Stream added, broadcasting: 1 I0308 17:44:55.824121 7 log.go:172] (0xc002d36d10) Reply frame received for 1 I0308 17:44:55.824172 7 log.go:172] (0xc002d36d10) (0xc001576000) Create stream I0308 17:44:55.824183 7 log.go:172] (0xc002d36d10) (0xc001576000) Stream added, broadcasting: 3 I0308 17:44:55.825142 7 log.go:172] (0xc002d36d10) Reply frame received for 3 I0308 17:44:55.825174 7 log.go:172] (0xc002d36d10) (0xc001b47400) Create stream I0308 17:44:55.825186 7 log.go:172] (0xc002d36d10) (0xc001b47400) Stream added, broadcasting: 5 I0308 17:44:55.826139 7 log.go:172] (0xc002d36d10) Reply frame received for 5 I0308 17:44:55.896831 7 log.go:172] (0xc002d36d10) Data frame received for 5 I0308 17:44:55.896868 7 log.go:172] (0xc001b47400) (5) Data frame handling I0308 17:44:55.896909 7 log.go:172] (0xc002d36d10) Data frame received for 3 I0308 17:44:55.896933 7 log.go:172] (0xc001576000) (3) Data frame handling I0308 17:44:55.896953 7 log.go:172] (0xc001576000) (3) Data frame sent I0308 17:44:55.897080 7 log.go:172] (0xc002d36d10) Data frame received for 3 I0308 17:44:55.897107 7 log.go:172] (0xc001576000) (3) Data frame handling I0308 17:44:55.898043 7 log.go:172] (0xc002d36d10) Data frame received for 1 I0308 17:44:55.898058 7 log.go:172] (0xc0010fff40) (1) Data frame handling I0308 17:44:55.898066 7 log.go:172] (0xc0010fff40) (1) Data frame sent I0308 17:44:55.898081 7 log.go:172] (0xc002d36d10) (0xc0010fff40) Stream removed, broadcasting: 1 I0308 17:44:55.898099 7 log.go:172] (0xc002d36d10) Go away received I0308 17:44:55.898252 7 log.go:172] (0xc002d36d10) (0xc0010fff40) Stream removed, broadcasting: 1 I0308 17:44:55.898274 7 log.go:172] (0xc002d36d10) 
(0xc001576000) Stream removed, broadcasting: 3 I0308 17:44:55.898286 7 log.go:172] (0xc002d36d10) (0xc001b47400) Stream removed, broadcasting: 5 Mar 8 17:44:55.898: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 8 17:44:55.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:55.898: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:55.929534 7 log.go:172] (0xc002cb6840) (0xc002a3c960) Create stream I0308 17:44:55.929561 7 log.go:172] (0xc002cb6840) (0xc002a3c960) Stream added, broadcasting: 1 I0308 17:44:55.931734 7 log.go:172] (0xc002cb6840) Reply frame received for 1 I0308 17:44:55.931773 7 log.go:172] (0xc002cb6840) (0xc0015760a0) Create stream I0308 17:44:55.931784 7 log.go:172] (0xc002cb6840) (0xc0015760a0) Stream added, broadcasting: 3 I0308 17:44:55.932629 7 log.go:172] (0xc002cb6840) Reply frame received for 3 I0308 17:44:55.932658 7 log.go:172] (0xc002cb6840) (0xc001576140) Create stream I0308 17:44:55.932674 7 log.go:172] (0xc002cb6840) (0xc001576140) Stream added, broadcasting: 5 I0308 17:44:55.933532 7 log.go:172] (0xc002cb6840) Reply frame received for 5 I0308 17:44:55.996588 7 log.go:172] (0xc002cb6840) Data frame received for 3 I0308 17:44:55.996618 7 log.go:172] (0xc0015760a0) (3) Data frame handling I0308 17:44:55.996627 7 log.go:172] (0xc0015760a0) (3) Data frame sent I0308 17:44:55.996649 7 log.go:172] (0xc002cb6840) Data frame received for 5 I0308 17:44:55.996676 7 log.go:172] (0xc001576140) (5) Data frame handling I0308 17:44:55.996699 7 log.go:172] (0xc002cb6840) Data frame received for 3 I0308 17:44:55.996712 7 log.go:172] (0xc0015760a0) (3) Data frame handling I0308 17:44:55.997758 7 log.go:172] (0xc002cb6840) Data frame received for 1 I0308 17:44:55.997780 7 log.go:172] (0xc002a3c960) (1) Data frame handling I0308 17:44:55.997797 7 log.go:172] (0xc002a3c960) (1) Data frame sent I0308 17:44:55.997812 7 log.go:172] (0xc002cb6840) (0xc002a3c960) Stream removed, broadcasting: 1 I0308 17:44:55.997836 7 log.go:172] (0xc002cb6840) Go away received I0308 17:44:55.998023 7 log.go:172] (0xc002cb6840) (0xc002a3c960) Stream removed, broadcasting: 1 I0308 17:44:55.998040 7 log.go:172] (0xc002cb6840) (0xc0015760a0) Stream removed, broadcasting: 3 I0308 17:44:55.998048 7 log.go:172] (0xc002cb6840) (0xc001576140) Stream removed, broadcasting: 5 Mar 8 17:44:55.998: INFO: Exec stderr: "" Mar 8 17:44:55.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:55.998: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:56.025878 7 log.go:172] (0xc0014da370) (0xc001c64280) Create stream I0308 17:44:56.025904 7 log.go:172] (0xc0014da370) (0xc001c64280) Stream added, broadcasting: 1 I0308 17:44:56.028016 7 log.go:172] (0xc0014da370) Reply frame received for 1 I0308 17:44:56.028054 7 log.go:172] (0xc0014da370) (0xc0012ddcc0) Create stream I0308 17:44:56.028073 7 log.go:172] (0xc0014da370) (0xc0012ddcc0) Stream added, broadcasting: 3 I0308 17:44:56.028917 7 log.go:172] (0xc0014da370) Reply frame received for 3 I0308 17:44:56.028950 7 log.go:172] (0xc0014da370) (0xc001c64320) Create stream I0308 17:44:56.028961 7 log.go:172] (0xc0014da370) (0xc001c64320) Stream added, broadcasting: 
5 I0308 17:44:56.029593 7 log.go:172] (0xc0014da370) Reply frame received for 5 I0308 17:44:56.079869 7 log.go:172] (0xc0014da370) Data frame received for 5 I0308 17:44:56.079899 7 log.go:172] (0xc001c64320) (5) Data frame handling I0308 17:44:56.079921 7 log.go:172] (0xc0014da370) Data frame received for 3 I0308 17:44:56.079932 7 log.go:172] (0xc0012ddcc0) (3) Data frame handling I0308 17:44:56.079957 7 log.go:172] (0xc0012ddcc0) (3) Data frame sent I0308 17:44:56.079967 7 log.go:172] (0xc0014da370) Data frame received for 3 I0308 17:44:56.079976 7 log.go:172] (0xc0012ddcc0) (3) Data frame handling I0308 17:44:56.081025 7 log.go:172] (0xc0014da370) Data frame received for 1 I0308 17:44:56.081043 7 log.go:172] (0xc001c64280) (1) Data frame handling I0308 17:44:56.081057 7 log.go:172] (0xc001c64280) (1) Data frame sent I0308 17:44:56.081075 7 log.go:172] (0xc0014da370) (0xc001c64280) Stream removed, broadcasting: 1 I0308 17:44:56.081090 7 log.go:172] (0xc0014da370) Go away received I0308 17:44:56.081208 7 log.go:172] (0xc0014da370) (0xc001c64280) Stream removed, broadcasting: 1 I0308 17:44:56.081234 7 log.go:172] (0xc0014da370) (0xc0012ddcc0) Stream removed, broadcasting: 3 I0308 17:44:56.081244 7 log.go:172] (0xc0014da370) (0xc001c64320) Stream removed, broadcasting: 5 Mar 8 17:44:56.081: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 8 17:44:56.081: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:56.081: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:56.105097 7 log.go:172] (0xc002cb6e70) (0xc002a3cc80) Create stream I0308 17:44:56.105116 7 log.go:172] (0xc002cb6e70) (0xc002a3cc80) Stream added, broadcasting: 1 I0308 17:44:56.107134 7 log.go:172] (0xc002cb6e70) Reply frame received for 1 I0308 17:44:56.107162 7 log.go:172] (0xc002cb6e70) (0xc001576280) Create stream I0308 17:44:56.107170 7 log.go:172] (0xc002cb6e70) (0xc001576280) Stream added, broadcasting: 3 I0308 17:44:56.108188 7 log.go:172] (0xc002cb6e70) Reply frame received for 3 I0308 17:44:56.108227 7 log.go:172] (0xc002cb6e70) (0xc001c64640) Create stream I0308 17:44:56.108237 7 log.go:172] (0xc002cb6e70) (0xc001c64640) Stream added, broadcasting: 5 I0308 17:44:56.109022 7 log.go:172] (0xc002cb6e70) Reply frame received for 5 I0308 17:44:56.181170 7 log.go:172] (0xc002cb6e70) Data frame received for 3 I0308 17:44:56.181217 7 log.go:172] (0xc001576280) (3) Data frame handling I0308 17:44:56.181239 7 log.go:172] (0xc001576280) (3) Data frame sent I0308 17:44:56.181257 7 log.go:172] (0xc002cb6e70) Data frame received for 3 I0308 17:44:56.181272 7 log.go:172] (0xc001576280) (3) Data frame handling I0308 17:44:56.181294 7 log.go:172] (0xc002cb6e70) Data frame received for 5 I0308 17:44:56.181321 7 log.go:172] (0xc001c64640) (5) Data frame handling I0308 17:44:56.182548 7 log.go:172] (0xc002cb6e70) Data frame received for 1 I0308 17:44:56.182572 7 log.go:172] (0xc002a3cc80) (1) Data frame handling I0308 17:44:56.182592 7 log.go:172] (0xc002a3cc80) (1) Data frame sent I0308 17:44:56.182750 7 log.go:172] (0xc002cb6e70) (0xc002a3cc80) Stream removed, broadcasting: 1 I0308 17:44:56.182806 7 log.go:172] (0xc002cb6e70) Go away received I0308 17:44:56.182853 7 log.go:172] (0xc002cb6e70) (0xc002a3cc80) Stream removed, broadcasting: 1 I0308 17:44:56.182870 7 log.go:172] (0xc002cb6e70) 
(0xc001576280) Stream removed, broadcasting: 3 I0308 17:44:56.182879 7 log.go:172] (0xc002cb6e70) (0xc001c64640) Stream removed, broadcasting: 5 Mar 8 17:44:56.182: INFO: Exec stderr: "" Mar 8 17:44:56.182: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:56.182: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:56.206885 7 log.go:172] (0xc002be3130) (0xc000cd0280) Create stream I0308 17:44:56.206915 7 log.go:172] (0xc002be3130) (0xc000cd0280) Stream added, broadcasting: 1 I0308 17:44:56.208911 7 log.go:172] (0xc002be3130) Reply frame received for 1 I0308 17:44:56.208953 7 log.go:172] (0xc002be3130) (0xc001b474a0) Create stream I0308 17:44:56.208967 7 log.go:172] (0xc002be3130) (0xc001b474a0) Stream added, broadcasting: 3 I0308 17:44:56.209736 7 log.go:172] (0xc002be3130) Reply frame received for 3 I0308 17:44:56.209770 7 log.go:172] (0xc002be3130) (0xc001c64820) Create stream I0308 17:44:56.209779 7 log.go:172] (0xc002be3130) (0xc001c64820) Stream added, broadcasting: 5 I0308 17:44:56.210399 7 log.go:172] (0xc002be3130) Reply frame received for 5 I0308 17:44:56.255727 7 log.go:172] (0xc002be3130) Data frame received for 3 I0308 17:44:56.255753 7 log.go:172] (0xc001b474a0) (3) Data frame handling I0308 17:44:56.255769 7 log.go:172] (0xc001b474a0) (3) Data frame sent I0308 17:44:56.255792 7 log.go:172] (0xc002be3130) Data frame received for 3 I0308 17:44:56.255797 7 log.go:172] (0xc001b474a0) (3) Data frame handling I0308 17:44:56.255844 7 log.go:172] (0xc002be3130) Data frame received for 5 I0308 17:44:56.255872 7 log.go:172] (0xc001c64820) (5) Data frame handling I0308 17:44:56.257078 7 log.go:172] (0xc002be3130) Data frame received for 1 I0308 17:44:56.257092 7 log.go:172] (0xc000cd0280) (1) Data frame handling I0308 17:44:56.257105 7 log.go:172] (0xc000cd0280) (1) Data frame sent I0308 17:44:56.257112 7 log.go:172] (0xc002be3130) (0xc000cd0280) Stream removed, broadcasting: 1 I0308 17:44:56.257120 7 log.go:172] (0xc002be3130) Go away received I0308 17:44:56.257246 7 log.go:172] (0xc002be3130) (0xc000cd0280) Stream removed, broadcasting: 1 I0308 17:44:56.257264 7 log.go:172] (0xc002be3130) (0xc001b474a0) Stream removed, broadcasting: 3 I0308 17:44:56.257276 7 log.go:172] (0xc002be3130) (0xc001c64820) Stream removed, broadcasting: 5 Mar 8 17:44:56.257: INFO: Exec stderr: "" Mar 8 17:44:56.257: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:56.257: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:56.277121 7 log.go:172] (0xc002be3760) (0xc000cd0780) Create stream I0308 17:44:56.277145 7 log.go:172] (0xc002be3760) (0xc000cd0780) Stream added, broadcasting: 1 I0308 17:44:56.279722 7 log.go:172] (0xc002be3760) Reply frame received for 1 I0308 17:44:56.279761 7 log.go:172] (0xc002be3760) (0xc001c648c0) Create stream I0308 17:44:56.279773 7 log.go:172] (0xc002be3760) (0xc001c648c0) Stream added, broadcasting: 3 I0308 17:44:56.282220 7 log.go:172] (0xc002be3760) Reply frame received for 3 I0308 17:44:56.282249 7 log.go:172] (0xc002be3760) (0xc001b47540) Create stream I0308 17:44:56.282259 7 log.go:172] (0xc002be3760) (0xc001b47540) Stream added, broadcasting: 5 I0308 17:44:56.282911 7 log.go:172] (0xc002be3760) Reply frame received for 5 
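The ExecWithOptions records in this test drive "cat /etc/hosts" and "cat /etc/hosts-original" through the API server's pods/exec subresource; the surrounding "Create stream", "Data frame received", and "Stream removed" lines are the multiplexed SPDY streams carrying the command's output and exit status back to the client. A minimal client-go sketch of the same call, assuming the v1.17-era client-go API (the helper name and buffers are illustrative):

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs cmd in one container of a pod over the pods/exec
// subresource and returns captured stdout/stderr, roughly what the
// framework's ExecWithOptions does under the hood.
func execInPod(config *restclient.Config, client kubernetes.Interface,
	ns, pod, container string, cmd []string) (string, string, error) {
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	// Stream blocks until the remote command exits, copying the
	// multiplexed channels into the supplied buffers.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

With a helper like this, the verification reduces to comparing the returned /etc/hosts content against what the kubelet writes versus the image's original file (mounted in this test as /etc/hosts-original).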
I0308 17:44:56.327207 7 log.go:172] (0xc002be3760) Data frame received for 5 I0308 17:44:56.327239 7 log.go:172] (0xc001b47540) (5) Data frame handling I0308 17:44:56.327260 7 log.go:172] (0xc002be3760) Data frame received for 3 I0308 17:44:56.327274 7 log.go:172] (0xc001c648c0) (3) Data frame handling I0308 17:44:56.327287 7 log.go:172] (0xc001c648c0) (3) Data frame sent I0308 17:44:56.327299 7 log.go:172] (0xc002be3760) Data frame received for 3 I0308 17:44:56.327306 7 log.go:172] (0xc001c648c0) (3) Data frame handling I0308 17:44:56.328139 7 log.go:172] (0xc002be3760) Data frame received for 1 I0308 17:44:56.328152 7 log.go:172] (0xc000cd0780) (1) Data frame handling I0308 17:44:56.328159 7 log.go:172] (0xc000cd0780) (1) Data frame sent I0308 17:44:56.328167 7 log.go:172] (0xc002be3760) (0xc000cd0780) Stream removed, broadcasting: 1 I0308 17:44:56.328183 7 log.go:172] (0xc002be3760) Go away received I0308 17:44:56.328308 7 log.go:172] (0xc002be3760) (0xc000cd0780) Stream removed, broadcasting: 1 I0308 17:44:56.328328 7 log.go:172] (0xc002be3760) (0xc001c648c0) Stream removed, broadcasting: 3 I0308 17:44:56.328337 7 log.go:172] (0xc002be3760) (0xc001b47540) Stream removed, broadcasting: 5 Mar 8 17:44:56.328: INFO: Exec stderr: "" Mar 8 17:44:56.328: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3694 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:44:56.328: INFO: >>> kubeConfig: /root/.kube/config I0308 17:44:56.354702 7 log.go:172] (0xc002d37340) (0xc0015766e0) Create stream I0308 17:44:56.354720 7 log.go:172] (0xc002d37340) (0xc0015766e0) Stream added, broadcasting: 1 I0308 17:44:56.356354 7 log.go:172] (0xc002d37340) Reply frame received for 1 I0308 17:44:56.356378 7 log.go:172] (0xc002d37340) (0xc002a3cd20) Create stream I0308 17:44:56.356387 7 log.go:172] (0xc002d37340) (0xc002a3cd20) Stream added, broadcasting: 3 I0308 17:44:56.357068 7 log.go:172] (0xc002d37340) Reply frame received for 3 I0308 17:44:56.357100 7 log.go:172] (0xc002d37340) (0xc001b475e0) Create stream I0308 17:44:56.357111 7 log.go:172] (0xc002d37340) (0xc001b475e0) Stream added, broadcasting: 5 I0308 17:44:56.357641 7 log.go:172] (0xc002d37340) Reply frame received for 5 I0308 17:44:56.404031 7 log.go:172] (0xc002d37340) Data frame received for 5 I0308 17:44:56.404078 7 log.go:172] (0xc001b475e0) (5) Data frame handling I0308 17:44:56.404095 7 log.go:172] (0xc002d37340) Data frame received for 3 I0308 17:44:56.404101 7 log.go:172] (0xc002a3cd20) (3) Data frame handling I0308 17:44:56.404108 7 log.go:172] (0xc002a3cd20) (3) Data frame sent I0308 17:44:56.404121 7 log.go:172] (0xc002d37340) Data frame received for 3 I0308 17:44:56.404125 7 log.go:172] (0xc002a3cd20) (3) Data frame handling I0308 17:44:56.404923 7 log.go:172] (0xc002d37340) Data frame received for 1 I0308 17:44:56.404945 7 log.go:172] (0xc0015766e0) (1) Data frame handling I0308 17:44:56.404970 7 log.go:172] (0xc0015766e0) (1) Data frame sent I0308 17:44:56.405034 7 log.go:172] (0xc002d37340) (0xc0015766e0) Stream removed, broadcasting: 1 I0308 17:44:56.405083 7 log.go:172] (0xc002d37340) Go away received I0308 17:44:56.405115 7 log.go:172] (0xc002d37340) (0xc0015766e0) Stream removed, broadcasting: 1 I0308 17:44:56.405147 7 log.go:172] (0xc002d37340) (0xc002a3cd20) Stream removed, broadcasting: 3 I0308 17:44:56.405157 7 log.go:172] (0xc002d37340) (0xc001b475e0) Stream removed, broadcasting: 5 Mar 8 17:44:56.405: INFO: Exec 
stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:44:56.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3694" for this suite. • [SLOW TEST:9.005 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:44:56.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:44:56.466: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 8 17:44:58.766: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:44:59.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6315" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":163,"skipped":2886,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:44:59.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 8 17:44:59.863: INFO: Waiting up to 5m0s for pod "client-containers-53428f60-423d-4112-9d5f-615830302fee" in namespace "containers-9841" to be "Succeeded or Failed" Mar 8 17:44:59.880: INFO: Pod "client-containers-53428f60-423d-4112-9d5f-615830302fee": Phase="Pending", Reason="", readiness=false. Elapsed: 16.366778ms Mar 8 17:45:01.883: INFO: Pod "client-containers-53428f60-423d-4112-9d5f-615830302fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019490757s STEP: Saw pod success Mar 8 17:45:01.883: INFO: Pod "client-containers-53428f60-423d-4112-9d5f-615830302fee" satisfied condition "Succeeded or Failed" Mar 8 17:45:01.886: INFO: Trying to get logs from node latest-worker pod client-containers-53428f60-423d-4112-9d5f-615830302fee container test-container: STEP: delete the pod Mar 8 17:45:01.909: INFO: Waiting for pod client-containers-53428f60-423d-4112-9d5f-615830302fee to disappear Mar 8 17:45:01.932: INFO: Pod client-containers-53428f60-423d-4112-9d5f-615830302fee no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:01.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9841" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2892,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:01.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 17:45:01.994: INFO: Waiting up to 5m0s for pod "pod-f5a0b095-d128-4b1f-b17c-07883c9e021e" in namespace "emptydir-7179" to be "Succeeded or Failed" Mar 8 17:45:02.003: INFO: Pod "pod-f5a0b095-d128-4b1f-b17c-07883c9e021e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.389047ms Mar 8 17:45:04.007: INFO: Pod "pod-f5a0b095-d128-4b1f-b17c-07883c9e021e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01323496s STEP: Saw pod success Mar 8 17:45:04.007: INFO: Pod "pod-f5a0b095-d128-4b1f-b17c-07883c9e021e" satisfied condition "Succeeded or Failed" Mar 8 17:45:04.010: INFO: Trying to get logs from node latest-worker pod pod-f5a0b095-d128-4b1f-b17c-07883c9e021e container test-container: STEP: delete the pod Mar 8 17:45:04.029: INFO: Waiting for pod pod-f5a0b095-d128-4b1f-b17c-07883c9e021e to disappear Mar 8 17:45:04.039: INFO: Pod pod-f5a0b095-d128-4b1f-b17c-07883c9e021e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:04.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7179" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:04.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:45:04.603: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:45:06.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286304, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286304, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286304, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286304, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:45:09.642: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:09.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3724" for this suite. STEP: Destroying namespace "webhook-3724-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.765 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":166,"skipped":2941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:09.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-4406 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4406 STEP: Deleting pre-stop pod Mar 8 17:45:18.931: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4406" for this suite. • [SLOW TEST:9.137 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":167,"skipped":2968,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:18.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:30.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3887" for this suite. • [SLOW TEST:11.180 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":168,"skipped":2973,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:30.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 17:45:30.815: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 8 17:45:32.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286330, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286330, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286330, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286330, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:45:35.857: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:45:35.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:37.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9534" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.978 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":169,"skipped":2975,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:37.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-d007765c-59d2-47e5-a536-a70c3cbabcae STEP: Creating a pod to test consume configMaps Mar 8 17:45:37.162: INFO: Waiting up to 5m0s for pod "pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2" in namespace "configmap-4157" to be "Succeeded or Failed" Mar 8 17:45:37.166: INFO: Pod "pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004319ms Mar 8 17:45:39.169: INFO: Pod "pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006759203s STEP: Saw pod success Mar 8 17:45:39.169: INFO: Pod "pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2" satisfied condition "Succeeded or Failed" Mar 8 17:45:39.172: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2 container configmap-volume-test: STEP: delete the pod Mar 8 17:45:39.223: INFO: Waiting for pod pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2 to disappear Mar 8 17:45:39.239: INFO: Pod pod-configmaps-e74aa777-4e91-4488-9dc4-88d87fb546e2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:45:39.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4157" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":3003,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:45:39.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6897.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6897.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:45:43.366: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.369: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.372: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.375: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.383: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.386: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.395: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod 
dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.398: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:43.404: INFO: Lookups using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local] Mar 8 17:45:48.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.414: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.417: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.427: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.430: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.433: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.435: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:48.444: INFO: Lookups using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local] Mar 8 17:45:53.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.417: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.425: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.428: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.430: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.433: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:53.438: INFO: Lookups using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local] Mar 8 17:45:58.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.412: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.416: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.419: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.437: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:45:58.443: INFO: Lookups using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local] Mar 8 17:46:03.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.412: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.414: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.416: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource 
(get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.423: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.425: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.428: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.430: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:03.434: INFO: Lookups using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local] Mar 8 17:46:08.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.412: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local from 
pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.437: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local from pod dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09: the server could not find the requested resource (get pods dns-test-22da9880-5810-4c1c-9738-9f79e6117e09) Mar 8 17:46:08.442: INFO: Lookups using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6897.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6897.svc.cluster.local jessie_udp@dns-test-service-2.dns-6897.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6897.svc.cluster.local] Mar 8 17:46:13.442: INFO: DNS probes using dns-6897/dns-test-22da9880-5810-4c1c-9738-9f79e6117e09 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:46:13.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6897" for this suite. • [SLOW TEST:34.355 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":171,"skipped":3006,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:46:13.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:46:13.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef" in namespace "downward-api-2024" to be "Succeeded or Failed" Mar 8 17:46:13.678: INFO: Pod "downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149188ms Mar 8 17:46:15.683: INFO: Pod "downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006971833s Mar 8 17:46:17.687: INFO: Pod "downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011270611s STEP: Saw pod success Mar 8 17:46:17.687: INFO: Pod "downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef" satisfied condition "Succeeded or Failed" Mar 8 17:46:17.690: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef container client-container: STEP: delete the pod Mar 8 17:46:17.737: INFO: Waiting for pod downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef to disappear Mar 8 17:46:17.744: INFO: Pod downwardapi-volume-50dc6df4-1c86-4094-89cd-e63e93edccef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:46:17.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2024" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3025,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:46:17.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 17:46:17.812: INFO: Waiting up to 5m0s for pod "pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3" in namespace "emptydir-7110" to be "Succeeded or Failed" Mar 8 17:46:17.816: INFO: Pod "pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.416986ms Mar 8 17:46:19.820: INFO: Pod "pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007109359s Mar 8 17:46:21.824: INFO: Pod "pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011377362s STEP: Saw pod success Mar 8 17:46:21.824: INFO: Pod "pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3" satisfied condition "Succeeded or Failed" Mar 8 17:46:21.828: INFO: Trying to get logs from node latest-worker pod pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3 container test-container: STEP: delete the pod Mar 8 17:46:21.857: INFO: Waiting for pod pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3 to disappear Mar 8 17:46:21.860: INFO: Pod pod-b0a7ebd1-d207-46b5-b431-733e1a3249a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:46:21.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7110" for this suite. 
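The Downward API volume case above relies on a resourceFieldRef fallback: when the container sets no CPU limit, the kubelet writes the node's allocatable CPU into the projected file instead. A sketch of that volume wiring, with the file name and mount path chosen for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No resources.limits.cpu here: the file below therefore falls
				// back to the node's allocatable CPU, which the test asserts.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}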
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":3037,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:46:21.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:46:28.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-319" for this suite. STEP: Destroying namespace "nsdeletetest-3545" for this suite. Mar 8 17:46:28.113: INFO: Namespace nsdeletetest-3545 was already deleted STEP: Destroying namespace "nsdeletetest-8232" for this suite. 
• [SLOW TEST:6.247 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":174,"skipped":3070,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:46:28.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:46:28.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187" in namespace "projected-668" to be "Succeeded or Failed" Mar 8 17:46:28.232: INFO: Pod "downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187": Phase="Pending", Reason="", readiness=false. Elapsed: 24.883844ms Mar 8 17:46:30.235: INFO: Pod "downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028359524s STEP: Saw pod success Mar 8 17:46:30.236: INFO: Pod "downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187" satisfied condition "Succeeded or Failed" Mar 8 17:46:30.238: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187 container client-container: STEP: delete the pod Mar 8 17:46:30.267: INFO: Waiting for pod downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187 to disappear Mar 8 17:46:30.276: INFO: Pod downwardapi-volume-ab35be74-81f6-48a9-be7b-54d902250187 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:46:30.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-668" for this suite. 
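The projected downwardAPI case that follows is the same resourceFieldRef idea routed through a projected volume, this time reading requests.memory. A sketch with an illustrative divisor (so the file reads a plain mebibyte count) and hypothetical paths:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
										Divisor:       resource.MustParse("1Mi"), // file reads "64"
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}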
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3086,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:46:30.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:46:30.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 8 17:46:30.869: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T17:46:30Z generation:1 name:name1 resourceVersion:58892 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f4c6746c-dc09-4f32-bd84-ab8f63dd0399] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 8 17:46:40.874: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T17:46:40Z generation:1 name:name2 resourceVersion:58940 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d67119d4-f2fc-4e40-8341-d82e8adf852b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 8 17:46:50.881: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T17:46:30Z generation:2 name:name1 resourceVersion:58970 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f4c6746c-dc09-4f32-bd84-ab8f63dd0399] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 8 17:47:00.886: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T17:46:40Z generation:2 name:name2 resourceVersion:59000 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d67119d4-f2fc-4e40-8341-d82e8adf852b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 8 17:47:10.894: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T17:46:30Z generation:2 name:name1 resourceVersion:59028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f4c6746c-dc09-4f32-bd84-ab8f63dd0399] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 8 17:47:20.901: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T17:46:40Z generation:2 name:name2 resourceVersion:59058 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d67119d4-f2fc-4e40-8341-d82e8adf852b] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:47:31.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8640" for this suite. • [SLOW TEST:61.143 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":176,"skipped":3101,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:47:31.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8766 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8766;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8766 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8766;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8766.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8766.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8766.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8766.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8766.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8766.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8766.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8766.svc 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8766.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.38.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.38.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.38.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.38.71_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8766 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8766;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8766 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8766;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8766.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8766.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8766.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8766.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8766.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8766.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8766.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8766.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8766.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.38.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.38.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.38.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.38.71_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:47:35.585: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.590: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.597: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.601: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.604: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.612: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.633: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.637: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.639: INFO: Unable to read jessie_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.645: INFO: Unable to read jessie_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.648: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:35.676: INFO: Lookups using 
dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8766 wheezy_tcp@dns-test-service.dns-8766 wheezy_udp@dns-test-service.dns-8766.svc wheezy_tcp@dns-test-service.dns-8766.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8766.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8766 jessie_tcp@dns-test-service.dns-8766 jessie_udp@dns-test-service.dns-8766.svc jessie_tcp@dns-test-service.dns-8766.svc] Mar 8 17:47:40.679: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.683: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.686: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.692: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.694: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.714: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.716: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.718: INFO: Unable to read jessie_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.721: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.723: INFO: Unable to read jessie_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.725: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:40.743: INFO: Lookups using 
dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8766 wheezy_tcp@dns-test-service.dns-8766 wheezy_udp@dns-test-service.dns-8766.svc wheezy_tcp@dns-test-service.dns-8766.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8766 jessie_tcp@dns-test-service.dns-8766 jessie_udp@dns-test-service.dns-8766.svc jessie_tcp@dns-test-service.dns-8766.svc] Mar 8 17:47:45.696: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.700: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.703: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.706: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.711: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.739: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.742: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.745: INFO: Unable to read jessie_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.750: INFO: Unable to read jessie_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.753: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:45.774: INFO: Lookups using 
dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8766 wheezy_tcp@dns-test-service.dns-8766 wheezy_udp@dns-test-service.dns-8766.svc wheezy_tcp@dns-test-service.dns-8766.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8766 jessie_tcp@dns-test-service.dns-8766 jessie_udp@dns-test-service.dns-8766.svc jessie_tcp@dns-test-service.dns-8766.svc] Mar 8 17:47:50.681: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.685: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.689: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.692: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.695: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.726: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.729: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.732: INFO: Unable to read jessie_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.735: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.738: INFO: Unable to read jessie_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.740: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:50.764: INFO: Lookups using 
dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8766 wheezy_tcp@dns-test-service.dns-8766 wheezy_udp@dns-test-service.dns-8766.svc wheezy_tcp@dns-test-service.dns-8766.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8766 jessie_tcp@dns-test-service.dns-8766 jessie_udp@dns-test-service.dns-8766.svc jessie_tcp@dns-test-service.dns-8766.svc] Mar 8 17:47:55.680: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.684: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.692: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.695: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.727: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.730: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.733: INFO: Unable to read jessie_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.739: INFO: Unable to read jessie_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.743: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:47:55.771: INFO: Lookups using 
dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8766 wheezy_tcp@dns-test-service.dns-8766 wheezy_udp@dns-test-service.dns-8766.svc wheezy_tcp@dns-test-service.dns-8766.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8766 jessie_tcp@dns-test-service.dns-8766 jessie_udp@dns-test-service.dns-8766.svc jessie_tcp@dns-test-service.dns-8766.svc] Mar 8 17:48:00.681: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.684: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.691: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.724: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.727: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.730: INFO: Unable to read jessie_udp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766 from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.736: INFO: Unable to read jessie_udp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.738: INFO: Unable to read jessie_tcp@dns-test-service.dns-8766.svc from pod dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb: the server could not find the requested resource (get pods dns-test-d50e772f-ce06-479b-8664-911314dbc9cb) Mar 8 17:48:00.759: INFO: Lookups using 
dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8766 wheezy_tcp@dns-test-service.dns-8766 wheezy_udp@dns-test-service.dns-8766.svc wheezy_tcp@dns-test-service.dns-8766.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8766 jessie_tcp@dns-test-service.dns-8766 jessie_udp@dns-test-service.dns-8766.svc jessie_tcp@dns-test-service.dns-8766.svc] Mar 8 17:48:05.785: INFO: DNS probes using dns-8766/dns-test-d50e772f-ce06-479b-8664-911314dbc9cb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:05.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8766" for this suite. • [SLOW TEST:34.550 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":177,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:05.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 8 17:48:06.050: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:20.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2346" for this suite.
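The "mark a version not served" step is a one-field change: flipping served to false on a CRD version removes its definition from discovery and the aggregated OpenAPI document while leaving the other version untouched. A sketch with illustrative group and kind names:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com
  spec:
    group: stable.example.com
    scope: Namespaced
    names:
      plural: crontabs
      singular: crontab
      kind: CronTab
    versions:
    - name: v1
      served: true             # published in discovery and /openapi/v2
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v2
      served: false            # unserved: its definition is dropped from the spec
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF
  # definition keys use the reverse-DNS form of the group; v2 should be absent
  kubectl get --raw /openapi/v2 | grep -c 'com.example.stable.v2.CronTab'   # expect 0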
• [SLOW TEST:14.144 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":178,"skipped":3143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:20.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:48:20.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d" in namespace "downward-api-7544" to be "Succeeded or Failed" Mar 8 17:48:20.262: INFO: Pod "downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270122ms Mar 8 17:48:22.266: INFO: Pod "downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01223951s STEP: Saw pod success Mar 8 17:48:22.266: INFO: Pod "downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d" satisfied condition "Succeeded or Failed" Mar 8 17:48:22.269: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d container client-container: STEP: delete the pod Mar 8 17:48:22.331: INFO: Waiting for pod downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d to disappear Mar 8 17:48:22.337: INFO: Pod downwardapi-volume-3ba54f33-3223-4c5c-a377-a673a7c50c6d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:22.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7544" for this suite. 
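Setting the mode on a downwardAPI item, as tested above, happens per entry alongside the fieldRef. A sketch (the metadata.labels source and the 0400 mode are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
    labels:
      app: mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "stat -c %a /etc/podinfo/labels"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
          mode: 0400           # YAML octal; applies to this item only
  EOF
  kubectl logs downwardapi-mode-demo   # once completed, should print 400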
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3169,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:22.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 17:48:22.393: INFO: Waiting up to 5m0s for pod "pod-2debe1c1-4442-49f5-a932-daaaba0c34dc" in namespace "emptydir-9283" to be "Succeeded or Failed" Mar 8 17:48:22.397: INFO: Pod "pod-2debe1c1-4442-49f5-a932-daaaba0c34dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003532ms Mar 8 17:48:24.401: INFO: Pod "pod-2debe1c1-4442-49f5-a932-daaaba0c34dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008060171s STEP: Saw pod success Mar 8 17:48:24.401: INFO: Pod "pod-2debe1c1-4442-49f5-a932-daaaba0c34dc" satisfied condition "Succeeded or Failed" Mar 8 17:48:24.404: INFO: Trying to get logs from node latest-worker pod pod-2debe1c1-4442-49f5-a932-daaaba0c34dc container test-container: STEP: delete the pod Mar 8 17:48:24.422: INFO: Waiting for pod pod-2debe1c1-4442-49f5-a932-daaaba0c34dc to disappear Mar 8 17:48:24.450: INFO: Pod pod-2debe1c1-4442-49f5-a932-daaaba0c34dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:24.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9283" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:24.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 17:48:24.519: INFO: Waiting up to 5m0s for pod "pod-37febb3d-1a66-4989-9125-7d1dd371de99" in namespace "emptydir-6283" to be "Succeeded or Failed" Mar 8 17:48:24.523: INFO: Pod "pod-37febb3d-1a66-4989-9125-7d1dd371de99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.691157ms Mar 8 17:48:26.525: INFO: Pod "pod-37febb3d-1a66-4989-9125-7d1dd371de99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006562607s STEP: Saw pod success Mar 8 17:48:26.525: INFO: Pod "pod-37febb3d-1a66-4989-9125-7d1dd371de99" satisfied condition "Succeeded or Failed" Mar 8 17:48:26.549: INFO: Trying to get logs from node latest-worker pod pod-37febb3d-1a66-4989-9125-7d1dd371de99 container test-container: STEP: delete the pod Mar 8 17:48:26.601: INFO: Waiting for pod pod-37febb3d-1a66-4989-9125-7d1dd371de99 to disappear Mar 8 17:48:26.613: INFO: Pod pod-37febb3d-1a66-4989-9125-7d1dd371de99 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:26.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6283" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3237,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:26.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Mar 8 17:48:26.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config cluster-info' Mar 8 17:48:29.044: INFO: stderr: "" Mar 8 17:48:29.044: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:29.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7032" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":182,"skipped":3238,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:29.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2837.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2837.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2837.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2837.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2837.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2837.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:48:33.211: INFO: DNS probes using dns-2837/dns-test-e8441460-b2c4-4cb5-ba2c-72a9966dd3b0 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:33.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2837" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":183,"skipped":3258,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:33.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8e89c6aa-4544-4fae-a1e3-fa3baabb9d77 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8e89c6aa-4544-4fae-a1e3-fa3baabb9d77 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:37.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9749" for this suite. 
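The "waiting to observe update in volume" step above is the interesting part: files under a projected configMap source are refreshed by the kubelet on its sync period, not at write time. Reproduced with illustrative names:

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: demo-cm
  EOF
  kubectl exec projected-cm-demo -- cat /etc/projected/data-1   # value-1
  kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
  # re-run until the kubelet syncs the projection (typically well under a minute)
  kubectl exec projected-cm-demo -- cat /etc/projected/data-1   # eventually value-2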
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3269,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:37.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-1875/configmap-test-7afd9595-12a0-4b3c-8ab4-9de919abe151 STEP: Creating a pod to test consume configMaps Mar 8 17:48:37.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35" in namespace "configmap-1875" to be "Succeeded or Failed" Mar 8 17:48:37.541: INFO: Pod "pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35": Phase="Pending", Reason="", readiness=false. Elapsed: 9.720153ms Mar 8 17:48:39.545: INFO: Pod "pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013508473s STEP: Saw pod success Mar 8 17:48:39.545: INFO: Pod "pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35" satisfied condition "Succeeded or Failed" Mar 8 17:48:39.547: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35 container env-test: STEP: delete the pod Mar 8 17:48:39.567: INFO: Waiting for pod pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35 to disappear Mar 8 17:48:39.571: INFO: Pod pod-configmaps-9c2c5979-c22a-41af-b361-58327a352c35 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:39.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1875" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:39.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 8 17:48:39.629: INFO: namespace kubectl-3793 Mar 8 17:48:39.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3793' Mar 8 17:48:39.999: INFO: stderr: "" Mar 8 17:48:39.999: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 17:48:41.004: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:48:41.004: INFO: Found 0 / 1 Mar 8 17:48:42.004: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:48:42.004: INFO: Found 0 / 1 Mar 8 17:48:43.004: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:48:43.004: INFO: Found 1 / 1 Mar 8 17:48:43.004: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 17:48:43.007: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 17:48:43.007: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 17:48:43.007: INFO: wait on agnhost-master startup in kubectl-3793 Mar 8 17:48:43.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs agnhost-master-qwxsx agnhost-master --namespace=kubectl-3793' Mar 8 17:48:43.121: INFO: stderr: "" Mar 8 17:48:43.121: INFO: stdout: "Paused\n" STEP: exposing RC Mar 8 17:48:43.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3793' Mar 8 17:48:43.264: INFO: stderr: "" Mar 8 17:48:43.264: INFO: stdout: "service/rm2 exposed\n" Mar 8 17:48:43.294: INFO: Service rm2 in namespace kubectl-3793 found. STEP: exposing service Mar 8 17:48:45.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3793' Mar 8 17:48:45.411: INFO: stderr: "" Mar 8 17:48:45.411: INFO: stdout: "service/rm3 exposed\n" Mar 8 17:48:45.418: INFO: Service rm3 in namespace kubectl-3793 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:47.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3793" for this suite. 
• [SLOW TEST:7.847 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1226 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":186,"skipped":3349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:47.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:47.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2601" for this suite. 
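The three discovery documents fetched above are plain GETs against the API server and can be inspected directly; jq here is only for readability and is an assumption about the local toolbox:

  kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io").versions'
  kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'
  # should list "customresourcedefinitions" among the group/version's resources
  kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'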
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":187,"skipped":3380,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:47.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-c0e428f9-dfb1-4812-8198-414a9fc7d138 STEP: Creating a pod to test consume secrets Mar 8 17:48:47.642: INFO: Waiting up to 5m0s for pod "pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759" in namespace "secrets-8121" to be "Succeeded or Failed" Mar 8 17:48:47.696: INFO: Pod "pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759": Phase="Pending", Reason="", readiness=false. Elapsed: 54.770807ms Mar 8 17:48:49.700: INFO: Pod "pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058782494s STEP: Saw pod success Mar 8 17:48:49.700: INFO: Pod "pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759" satisfied condition "Succeeded or Failed" Mar 8 17:48:49.704: INFO: Trying to get logs from node latest-worker pod pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759 container secret-volume-test: STEP: delete the pod Mar 8 17:48:49.727: INFO: Waiting for pod pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759 to disappear Mar 8 17:48:49.730: INFO: Pod pod-secrets-8aa78e7c-8983-46d8-99ef-629d9b61e759 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:48:49.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8121" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3393,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:48:49.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8327, will wait for the garbage collector to delete the pods Mar 8 17:48:53.873: INFO: Deleting Job.batch foo took: 6.785046ms Mar 8 17:48:54.174: INFO: Terminating Job.batch foo pods took: 300.274925ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:32.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8327" for this suite. • [SLOW TEST:42.725 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":189,"skipped":3403,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:32.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:49:33.015: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:49:36.063: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:36.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9517" for this suite. STEP: Destroying namespace "webhook-9517-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":190,"skipped":3406,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:36.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:49:36.921: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:49:38.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286576, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286576, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286577, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719286576, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:49:41.960: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:42.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6277" for this suite. STEP: Destroying namespace "webhook-6277-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.708 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":191,"skipped":3424,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:42.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-61a47253-0c46-4d55-9743-29086212eb5f STEP: Creating a pod to test consume secrets Mar 8 17:49:42.379: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb" in namespace "projected-4721" to be "Succeeded or Failed" Mar 8 17:49:42.381: INFO: Pod "pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544563ms Mar 8 17:49:44.385: INFO: Pod "pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006155425s STEP: Saw pod success Mar 8 17:49:44.385: INFO: Pod "pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb" satisfied condition "Succeeded or Failed" Mar 8 17:49:44.388: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb container projected-secret-volume-test: STEP: delete the pod Mar 8 17:49:44.430: INFO: Waiting for pod pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb to disappear Mar 8 17:49:44.441: INFO: Pod pod-projected-secrets-8191a6a1-9dac-471d-b6c9-c7ab8dfe88bb no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:44.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4721" for this suite. 
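Reference sketch for the projected-secret case above: the test mounts one secret key at a remapped path inside a projected volume. The Go program below builds an equivalent pod spec with client-go's typed API and prints it; it only constructs the object, and the names (projected-secret-demo, my-secret, the key data-1, the remapped path) are illustrative assumptions, not values from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "reader",
				Image: "busybox",
				// Read the remapped file and exit; the e2e test asserts on
				// container output in much the same way.
				Command: []string{"cat", "/etc/projected/new-path/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
								// Items remaps the secret key "data-1" to a
								// new relative path under the mount point.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path/data-1"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}

Without Items, every key of the secret would appear under the mount path using the key name as the file name; the mapping is what the "with mappings" variant of the test exercises.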
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3425,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:44.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:49:44.540: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 14.138987ms) Mar 8 17:49:44.543: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.094351ms) Mar 8 17:49:44.546: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.90231ms) Mar 8 17:49:44.548: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.515023ms) Mar 8 17:49:44.551: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.540538ms) Mar 8 17:49:44.553: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.25019ms) Mar 8 17:49:44.555: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.163305ms) Mar 8 17:49:44.557: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 1.922897ms) Mar 8 17:49:44.559: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 1.986909ms) Mar 8 17:49:44.562: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.877047ms) Mar 8 17:49:44.565: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.609962ms) Mar 8 17:49:44.567: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.364782ms) Mar 8 17:49:44.570: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.164551ms) Mar 8 17:49:44.576: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.436623ms) Mar 8 17:49:44.579: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.94693ms) Mar 8 17:49:44.581: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.088546ms) Mar 8 17:49:44.583: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 1.779244ms) Mar 8 17:49:44.585: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.271055ms) Mar 8 17:49:44.607: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 21.875837ms) Mar 8 17:49:44.610: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.528535ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:44.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5923" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":193,"skipped":3430,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:44.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:49.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8866" for this suite. • [SLOW TEST:5.459 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":194,"skipped":3439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:50.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-cecaf6ee-d3fd-417f-9da3-f317f6deb09d STEP: Creating a pod to test consume secrets Mar 8 17:49:50.142: INFO: Waiting up to 5m0s for pod "pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9" in namespace "secrets-7875" to be "Succeeded or Failed" Mar 8 17:49:50.155: INFO: Pod "pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.418595ms Mar 8 17:49:52.159: INFO: Pod "pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017483258s Mar 8 17:49:54.163: INFO: Pod "pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021360765s STEP: Saw pod success Mar 8 17:49:54.163: INFO: Pod "pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9" satisfied condition "Succeeded or Failed" Mar 8 17:49:54.166: INFO: Trying to get logs from node latest-worker pod pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9 container secret-volume-test: STEP: delete the pod Mar 8 17:49:54.200: INFO: Waiting for pod pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9 to disappear Mar 8 17:49:54.232: INFO: Pod pod-secrets-4a4c2c89-a34e-4e89-9e07-3c1d9e60a8c9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:49:54.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7875" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:49:54.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 17:50:00.374906 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 17:50:00.374: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:50:00.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7344" for this suite. • [SLOW TEST:6.139 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":196,"skipped":3489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:50:00.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 17:50:00.452: INFO: Waiting up to 5m0s for pod "pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e" in namespace "emptydir-9468" to be "Succeeded or Failed" Mar 8 17:50:00.466: INFO: Pod "pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.189524ms Mar 8 17:50:02.470: INFO: Pod "pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e": Phase="Running", Reason="", readiness=true. Elapsed: 2.017796999s Mar 8 17:50:04.473: INFO: Pod "pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021187179s STEP: Saw pod success Mar 8 17:50:04.474: INFO: Pod "pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e" satisfied condition "Succeeded or Failed" Mar 8 17:50:04.477: INFO: Trying to get logs from node latest-worker pod pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e container test-container: STEP: delete the pod Mar 8 17:50:04.498: INFO: Waiting for pod pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e to disappear Mar 8 17:50:04.502: INFO: Pod pod-033a21fc-8a16-40e9-b6c7-8ad35cc4cd1e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:50:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9468" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3519,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:50:04.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 17:50:08.260: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:50:08.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4808" for this suite. 
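Reference sketch for the termination-message case above: with TerminationMessagePolicy FallbackToLogsOnError, the kubelet reads container logs only when the container exits with an error and nothing was written to the termination-message path, so a succeeding container that writes nothing reports an empty message, which is exactly what the test asserts ("Expected: &{} to match ..."). A minimal sketch of the relevant container fields, assuming a busybox image; it only constructs and prints the spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "main",
		Image:   "busybox",
		Command: []string{"sh", "-c", "exit 0"}, // succeeds, writes nothing
		// Default path is /dev/termination-log; logs are consulted only as
		// a fallback when the container fails, so on success the message
		// stays empty.
		TerminationMessagePath:   corev1.TerminationMessagePathDefault,
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Printf("%+v\n", c)
}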
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3524,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:50:08.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-cd8b2c00-359d-45de-ba96-3474feffbcd0 STEP: Creating a pod to test consume configMaps Mar 8 17:50:08.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b" in namespace "projected-1776" to be "Succeeded or Failed" Mar 8 17:50:08.601: INFO: Pod "pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.294497ms Mar 8 17:50:10.604: INFO: Pod "pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037039607s STEP: Saw pod success Mar 8 17:50:10.604: INFO: Pod "pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b" satisfied condition "Succeeded or Failed" Mar 8 17:50:10.606: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b container projected-configmap-volume-test: STEP: delete the pod Mar 8 17:50:10.642: INFO: Waiting for pod pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b to disappear Mar 8 17:50:10.679: INFO: Pod pod-projected-configmaps-5d792f32-ddd5-414a-aebb-7587114dac4b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:50:10.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1776" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3531,"failed":0} ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:50:10.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 8 17:50:10.739: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60582 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:50:10.739: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60582 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 8 17:50:20.773: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60629 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:50:20.773: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60629 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 8 17:50:30.780: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60659 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:50:30.780: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 
1657ac10-0340-4714-9b11-cdb4390d54ce 60659 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 8 17:50:40.792: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60689 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:50:40.792: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-a 1657ac10-0340-4714-9b11-cdb4390d54ce 60689 0 2020-03-08 17:50:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 8 17:50:50.800: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-b 1e8c3ba6-8b4b-4461-acb3-f01bd069c385 60717 0 2020-03-08 17:50:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:50:50.800: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-b 1e8c3ba6-8b4b-4461-acb3-f01bd069c385 60717 0 2020-03-08 17:50:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 8 17:51:00.805: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-b 1e8c3ba6-8b4b-4461-acb3-f01bd069c385 60747 0 2020-03-08 17:50:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:51:00.805: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-353 /api/v1/namespaces/watch-353/configmaps/e2e-watch-test-configmap-b 1e8c3ba6-8b4b-4461-acb3-f01bd069c385 60747 0 2020-03-08 17:50:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:51:10.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-353" for this suite. 
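Reference sketch for the watch semantics above: the label-selected watchers the test sets up (label A, label B, A-or-B) map directly onto client-go's Watch call, and the ADDED/MODIFIED/DELETED events printed as "Got : ..." arrive on the result channel in order. The sketch assumes a recent client-go where Watch takes a context, uses the test's own label value, and the "default" namespace is a placeholder.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch configmaps carrying label A, as the test does for watcher A.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		// ev.Type is ADDED, MODIFIED, or DELETED, matching the log above.
		fmt.Println(ev.Type, cm.Namespace+"/"+cm.Name, cm.ResourceVersion)
	}
}

Because the A-or-B watcher shares the same event stream ordering, each notification above is observed twice, once per matching watcher, which is why every "Got :" line appears in pairs.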
• [SLOW TEST:60.131 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":200,"skipped":3531,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:51:10.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6664 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 8 17:51:10.902: INFO: Found 0 stateful pods, waiting for 3 Mar 8 17:51:20.907: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:51:20.907: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:51:20.907: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 8 17:51:20.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6664 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 17:51:21.159: INFO: stderr: "I0308 17:51:21.061964 2544 log.go:172] (0xc000a71290) (0xc000a76780) Create stream\nI0308 17:51:21.062007 2544 log.go:172] (0xc000a71290) (0xc000a76780) Stream added, broadcasting: 1\nI0308 17:51:21.065754 2544 log.go:172] (0xc000a71290) Reply frame received for 1\nI0308 17:51:21.065792 2544 log.go:172] (0xc000a71290) (0xc0007e3680) Create stream\nI0308 17:51:21.065806 2544 log.go:172] (0xc000a71290) (0xc0007e3680) Stream added, broadcasting: 3\nI0308 17:51:21.066711 2544 log.go:172] (0xc000a71290) Reply frame received for 3\nI0308 17:51:21.066746 2544 log.go:172] (0xc000a71290) (0xc000526aa0) Create stream\nI0308 17:51:21.066765 2544 log.go:172] (0xc000a71290) (0xc000526aa0) Stream added, broadcasting: 5\nI0308 17:51:21.067526 2544 log.go:172] (0xc000a71290) Reply frame received for 5\nI0308 17:51:21.119727 2544 log.go:172] (0xc000a71290) Data frame received for 5\nI0308 17:51:21.119747 2544 log.go:172] (0xc000526aa0) (5) Data frame handling\nI0308 17:51:21.119763 2544 log.go:172] (0xc000526aa0) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0308 17:51:21.154258 2544 log.go:172] (0xc000a71290) Data frame received for 3\nI0308 17:51:21.154278 2544 log.go:172] (0xc0007e3680) (3) Data frame handling\nI0308 17:51:21.154292 2544 log.go:172] (0xc0007e3680) (3) Data frame sent\nI0308 17:51:21.154440 2544 log.go:172] (0xc000a71290) Data frame received for 5\nI0308 17:51:21.154478 2544 log.go:172] (0xc000526aa0) (5) Data frame handling\nI0308 17:51:21.154499 2544 log.go:172] (0xc000a71290) Data frame received for 3\nI0308 17:51:21.154512 2544 log.go:172] (0xc0007e3680) (3) Data frame handling\nI0308 17:51:21.155809 2544 log.go:172] (0xc000a71290) Data frame received for 1\nI0308 17:51:21.155831 2544 log.go:172] (0xc000a76780) (1) Data frame handling\nI0308 17:51:21.155857 2544 log.go:172] (0xc000a76780) (1) Data frame sent\nI0308 17:51:21.155883 2544 log.go:172] (0xc000a71290) (0xc000a76780) Stream removed, broadcasting: 1\nI0308 17:51:21.155913 2544 log.go:172] (0xc000a71290) Go away received\nI0308 17:51:21.156204 2544 log.go:172] (0xc000a71290) (0xc000a76780) Stream removed, broadcasting: 1\nI0308 17:51:21.156222 2544 log.go:172] (0xc000a71290) (0xc0007e3680) Stream removed, broadcasting: 3\nI0308 17:51:21.156230 2544 log.go:172] (0xc000a71290) (0xc000526aa0) Stream removed, broadcasting: 5\n" Mar 8 17:51:21.159: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 17:51:21.159: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 17:51:31.192: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 8 17:51:41.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6664 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 17:51:41.454: INFO: stderr: "I0308 17:51:41.393086 2565 log.go:172] (0xc000802370) (0xc000a98820) Create stream\nI0308 17:51:41.393133 2565 log.go:172] (0xc000802370) (0xc000a98820) Stream added, broadcasting: 1\nI0308 17:51:41.397667 2565 log.go:172] (0xc000802370) Reply frame received for 1\nI0308 17:51:41.397702 2565 log.go:172] (0xc000802370) (0xc0005fb5e0) Create stream\nI0308 17:51:41.397714 2565 log.go:172] (0xc000802370) (0xc0005fb5e0) Stream added, broadcasting: 3\nI0308 17:51:41.398817 2565 log.go:172] (0xc000802370) Reply frame received for 3\nI0308 17:51:41.398859 2565 log.go:172] (0xc000802370) (0xc0004c0a00) Create stream\nI0308 17:51:41.398872 2565 log.go:172] (0xc000802370) (0xc0004c0a00) Stream added, broadcasting: 5\nI0308 17:51:41.399766 2565 log.go:172] (0xc000802370) Reply frame received for 5\nI0308 17:51:41.448639 2565 log.go:172] (0xc000802370) Data frame received for 3\nI0308 17:51:41.448664 2565 log.go:172] (0xc0005fb5e0) (3) Data frame handling\nI0308 17:51:41.448674 2565 log.go:172] (0xc0005fb5e0) (3) Data frame sent\nI0308 17:51:41.448680 2565 log.go:172] (0xc000802370) Data frame received for 3\nI0308 17:51:41.448688 2565 log.go:172] (0xc0005fb5e0) (3) Data frame handling\nI0308 17:51:41.448713 2565 log.go:172] (0xc000802370) Data frame received for 5\nI0308 17:51:41.448724 2565 log.go:172] (0xc0004c0a00) (5) Data frame handling\nI0308 17:51:41.448732 2565 log.go:172] (0xc0004c0a00) (5) Data frame sent\nI0308 
17:51:41.448738 2565 log.go:172] (0xc000802370) Data frame received for 5\nI0308 17:51:41.448744 2565 log.go:172] (0xc0004c0a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 17:51:41.450194 2565 log.go:172] (0xc000802370) Data frame received for 1\nI0308 17:51:41.450216 2565 log.go:172] (0xc000a98820) (1) Data frame handling\nI0308 17:51:41.450225 2565 log.go:172] (0xc000a98820) (1) Data frame sent\nI0308 17:51:41.450235 2565 log.go:172] (0xc000802370) (0xc000a98820) Stream removed, broadcasting: 1\nI0308 17:51:41.450532 2565 log.go:172] (0xc000802370) (0xc000a98820) Stream removed, broadcasting: 1\nI0308 17:51:41.450547 2565 log.go:172] (0xc000802370) (0xc0005fb5e0) Stream removed, broadcasting: 3\nI0308 17:51:41.450712 2565 log.go:172] (0xc000802370) (0xc0004c0a00) Stream removed, broadcasting: 5\n" Mar 8 17:51:41.454: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 17:51:41.454: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 17:51:51.475: INFO: Waiting for StatefulSet statefulset-6664/ss2 to complete update Mar 8 17:51:51.475: INFO: Waiting for Pod statefulset-6664/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 17:51:51.475: INFO: Waiting for Pod statefulset-6664/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 17:51:51.475: INFO: Waiting for Pod statefulset-6664/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 17:52:01.483: INFO: Waiting for StatefulSet statefulset-6664/ss2 to complete update Mar 8 17:52:01.483: INFO: Waiting for Pod statefulset-6664/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 17:52:11.482: INFO: Waiting for StatefulSet statefulset-6664/ss2 to complete update Mar 8 17:52:11.482: INFO: Waiting for Pod statefulset-6664/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 8 17:52:21.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6664 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 17:52:21.696: INFO: stderr: "I0308 17:52:21.614078 2584 log.go:172] (0xc00003ae70) (0xc00076e280) Create stream\nI0308 17:52:21.614140 2584 log.go:172] (0xc00003ae70) (0xc00076e280) Stream added, broadcasting: 1\nI0308 17:52:21.616374 2584 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0308 17:52:21.616459 2584 log.go:172] (0xc00003ae70) (0xc00053b900) Create stream\nI0308 17:52:21.616487 2584 log.go:172] (0xc00003ae70) (0xc00053b900) Stream added, broadcasting: 3\nI0308 17:52:21.617264 2584 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0308 17:52:21.617285 2584 log.go:172] (0xc00003ae70) (0xc00060f220) Create stream\nI0308 17:52:21.617293 2584 log.go:172] (0xc00003ae70) (0xc00060f220) Stream added, broadcasting: 5\nI0308 17:52:21.617889 2584 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0308 17:52:21.669186 2584 log.go:172] (0xc00003ae70) Data frame received for 5\nI0308 17:52:21.669216 2584 log.go:172] (0xc00060f220) (5) Data frame handling\nI0308 17:52:21.669235 2584 log.go:172] (0xc00060f220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 17:52:21.690150 2584 log.go:172] (0xc00003ae70) Data frame received for 3\nI0308 17:52:21.690172 2584 log.go:172] 
(0xc00053b900) (3) Data frame handling\nI0308 17:52:21.690188 2584 log.go:172] (0xc00053b900) (3) Data frame sent\nI0308 17:52:21.690355 2584 log.go:172] (0xc00003ae70) Data frame received for 5\nI0308 17:52:21.690384 2584 log.go:172] (0xc00003ae70) Data frame received for 3\nI0308 17:52:21.690403 2584 log.go:172] (0xc00053b900) (3) Data frame handling\nI0308 17:52:21.690421 2584 log.go:172] (0xc00060f220) (5) Data frame handling\nI0308 17:52:21.691817 2584 log.go:172] (0xc00003ae70) Data frame received for 1\nI0308 17:52:21.691830 2584 log.go:172] (0xc00076e280) (1) Data frame handling\nI0308 17:52:21.691838 2584 log.go:172] (0xc00076e280) (1) Data frame sent\nI0308 17:52:21.691846 2584 log.go:172] (0xc00003ae70) (0xc00076e280) Stream removed, broadcasting: 1\nI0308 17:52:21.691860 2584 log.go:172] (0xc00003ae70) Go away received\nI0308 17:52:21.692093 2584 log.go:172] (0xc00003ae70) (0xc00076e280) Stream removed, broadcasting: 1\nI0308 17:52:21.692110 2584 log.go:172] (0xc00003ae70) (0xc00053b900) Stream removed, broadcasting: 3\nI0308 17:52:21.692117 2584 log.go:172] (0xc00003ae70) (0xc00060f220) Stream removed, broadcasting: 5\n" Mar 8 17:52:21.696: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 17:52:21.696: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 17:52:31.729: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 8 17:52:41.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6664 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 17:52:41.970: INFO: stderr: "I0308 17:52:41.912713 2604 log.go:172] (0xc0008ea000) (0xc00052caa0) Create stream\nI0308 17:52:41.912770 2604 log.go:172] (0xc0008ea000) (0xc00052caa0) Stream added, broadcasting: 1\nI0308 17:52:41.914720 2604 log.go:172] (0xc0008ea000) Reply frame received for 1\nI0308 17:52:41.914773 2604 log.go:172] (0xc0008ea000) (0xc000578000) Create stream\nI0308 17:52:41.914797 2604 log.go:172] (0xc0008ea000) (0xc000578000) Stream added, broadcasting: 3\nI0308 17:52:41.915722 2604 log.go:172] (0xc0008ea000) Reply frame received for 3\nI0308 17:52:41.915765 2604 log.go:172] (0xc0008ea000) (0xc00080b2c0) Create stream\nI0308 17:52:41.915782 2604 log.go:172] (0xc0008ea000) (0xc00080b2c0) Stream added, broadcasting: 5\nI0308 17:52:41.916733 2604 log.go:172] (0xc0008ea000) Reply frame received for 5\nI0308 17:52:41.964988 2604 log.go:172] (0xc0008ea000) Data frame received for 3\nI0308 17:52:41.965014 2604 log.go:172] (0xc000578000) (3) Data frame handling\nI0308 17:52:41.965030 2604 log.go:172] (0xc000578000) (3) Data frame sent\nI0308 17:52:41.965038 2604 log.go:172] (0xc0008ea000) Data frame received for 3\nI0308 17:52:41.965045 2604 log.go:172] (0xc000578000) (3) Data frame handling\nI0308 17:52:41.965326 2604 log.go:172] (0xc0008ea000) Data frame received for 5\nI0308 17:52:41.965347 2604 log.go:172] (0xc00080b2c0) (5) Data frame handling\nI0308 17:52:41.965362 2604 log.go:172] (0xc00080b2c0) (5) Data frame sent\nI0308 17:52:41.965373 2604 log.go:172] (0xc0008ea000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 17:52:41.965379 2604 log.go:172] (0xc00080b2c0) (5) Data frame handling\nI0308 17:52:41.966698 2604 log.go:172] (0xc0008ea000) Data frame received for 1\nI0308 17:52:41.966717 2604 log.go:172] 
(0xc00052caa0) (1) Data frame handling\nI0308 17:52:41.966739 2604 log.go:172] (0xc00052caa0) (1) Data frame sent\nI0308 17:52:41.966756 2604 log.go:172] (0xc0008ea000) (0xc00052caa0) Stream removed, broadcasting: 1\nI0308 17:52:41.966770 2604 log.go:172] (0xc0008ea000) Go away received\nI0308 17:52:41.967144 2604 log.go:172] (0xc0008ea000) (0xc00052caa0) Stream removed, broadcasting: 1\nI0308 17:52:41.967169 2604 log.go:172] (0xc0008ea000) (0xc000578000) Stream removed, broadcasting: 3\nI0308 17:52:41.967182 2604 log.go:172] (0xc0008ea000) (0xc00080b2c0) Stream removed, broadcasting: 5\n" Mar 8 17:52:41.970: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 17:52:41.970: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 17:52:51.992: INFO: Waiting for StatefulSet statefulset-6664/ss2 to complete update Mar 8 17:52:51.992: INFO: Waiting for Pod statefulset-6664/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 8 17:53:02.001: INFO: Waiting for StatefulSet statefulset-6664/ss2 to complete update Mar 8 17:53:02.001: INFO: Waiting for Pod statefulset-6664/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 17:53:12.000: INFO: Deleting all statefulset in ns statefulset-6664 Mar 8 17:53:12.003: INFO: Scaling statefulset ss2 to 0 Mar 8 17:53:52.076: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 17:53:52.079: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:52.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6664" for this suite. 
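Reference sketch for the update step logged as "Updating stateful set ss2": the rolling update is an ordinary write to the pod template (the default RollingUpdate strategy then replaces pods in reverse ordinal order), and the rollback is the same write with the previous image. A sketch using client-go with retry-on-conflict, since the controller may bump the object between Get and Update; the "default" namespace is a placeholder, while ss2 and the httpd images are taken from the run above.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := cs.AppsV1().StatefulSets("default").Get(ctx, "ss2", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Rolling forward; writing the old image back is the rollback step.
		ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
		_, err = cs.AppsV1().StatefulSets("default").Update(ctx, ss, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("template updated; RollingUpdate replaces pods in reverse ordinal order")
}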
• [SLOW TEST:161.287 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":201,"skipped":3564,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:52.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-0980e89c-28ea-41a3-b22a-710306fadb89 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:52.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9158" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":202,"skipped":3568,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:52.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:53:52.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f" in namespace "projected-6811" to be "Succeeded or Failed" Mar 8 17:53:52.297: INFO: Pod "downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.288501ms Mar 8 17:53:54.300: INFO: Pod "downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010296548s STEP: Saw pod success Mar 8 17:53:54.300: INFO: Pod "downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f" satisfied condition "Succeeded or Failed" Mar 8 17:53:54.301: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f container client-container: STEP: delete the pod Mar 8 17:53:54.327: INFO: Waiting for pod downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f to disappear Mar 8 17:53:54.338: INFO: Pod downwardapi-volume-50c3d604-370f-4438-93a0-2cbb0000b85f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:54.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6811" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3568,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:54.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a27ca544-e194-4a75-8c8a-db9053c57529 STEP: Creating a pod to test consume configMaps Mar 8 17:53:54.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f" in namespace "configmap-5589" to be "Succeeded or Failed" Mar 8 17:53:54.436: INFO: Pod "pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.858698ms Mar 8 17:53:56.440: INFO: Pod "pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020143097s STEP: Saw pod success Mar 8 17:53:56.440: INFO: Pod "pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f" satisfied condition "Succeeded or Failed" Mar 8 17:53:56.444: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f container configmap-volume-test: STEP: delete the pod Mar 8 17:53:56.466: INFO: Waiting for pod pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f to disappear Mar 8 17:53:56.475: INFO: Pod pod-configmaps-d693639e-4a54-4ed6-8e86-b4572bc0165f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:56.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5589" for this suite. 
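Reference sketch for the "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines that recur throughout this run: the framework's condition is a simple poll on the pod phase. The sketch below reimplements the same idea with client-go and apimachinery's wait helpers; the pod name, namespace, and 2s poll interval are placeholder assumptions, while the 5m budget mirrors the log.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Poll every 2s, give up after 5m, mirroring the budget in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("default").Get(ctx, "my-test-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Phase=%q\n", pod.Status.Phase)
		// Done as soon as the pod reaches a terminal phase.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}

The Elapsed values printed in the log are simply the time since the wait began, measured at each poll.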
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:56.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:53:56.561: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:57.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2945" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":205,"skipped":3636,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:57.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:53:57.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298" in namespace "projected-1980" to be "Succeeded or Failed" Mar 8 17:53:57.733: INFO: Pod "downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298": Phase="Pending", Reason="", readiness=false. Elapsed: 9.928694ms Mar 8 17:53:59.773: INFO: Pod "downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.049189009s STEP: Saw pod success Mar 8 17:53:59.773: INFO: Pod "downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298" satisfied condition "Succeeded or Failed" Mar 8 17:53:59.775: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298 container client-container: STEP: delete the pod Mar 8 17:53:59.795: INFO: Waiting for pod downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298 to disappear Mar 8 17:53:59.799: INFO: Pod downwardapi-volume-786cdf6d-9865-41a6-b5e0-76e309080298 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:59.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1980" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3645,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:59.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:53:59.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7953" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":207,"skipped":3663,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:53:59.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 8 17:54:04.053: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9119 PodName:pod-sharedvolume-3a9546e0-1d26-476c-aa55-b4eec0183b6b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:54:04.053: INFO: >>> kubeConfig: /root/.kube/config I0308 17:54:04.091410 7 log.go:172] (0xc002cb71e0) (0xc0015dae60) Create stream I0308 17:54:04.091440 7 log.go:172] (0xc002cb71e0) (0xc0015dae60) Stream added, broadcasting: 1 I0308 17:54:04.094276 7 log.go:172] (0xc002cb71e0) Reply frame received for 1 I0308 17:54:04.094328 7 log.go:172] (0xc002cb71e0) (0xc0015dafa0) Create stream I0308 17:54:04.094345 7 log.go:172] (0xc002cb71e0) (0xc0015dafa0) Stream added, broadcasting: 3 I0308 17:54:04.095427 7 log.go:172] (0xc002cb71e0) Reply frame received for 3 I0308 17:54:04.095468 7 log.go:172] (0xc002cb71e0) (0xc0015db040) Create stream I0308 17:54:04.095488 7 log.go:172] (0xc002cb71e0) (0xc0015db040) Stream added, broadcasting: 5 I0308 17:54:04.096589 7 log.go:172] (0xc002cb71e0) Reply frame received for 5 I0308 17:54:04.149588 7 log.go:172] (0xc002cb71e0) Data frame received for 5 I0308 17:54:04.149621 7 log.go:172] (0xc002cb71e0) Data frame received for 3 I0308 17:54:04.149655 7 log.go:172] (0xc0015dafa0) (3) Data frame handling I0308 17:54:04.149671 7 log.go:172] (0xc0015dafa0) (3) Data frame sent I0308 17:54:04.149685 7 log.go:172] (0xc002cb71e0) Data frame received for 3 I0308 17:54:04.149698 7 log.go:172] (0xc0015dafa0) (3) Data frame handling I0308 17:54:04.149724 7 log.go:172] (0xc0015db040) (5) Data frame handling I0308 17:54:04.151232 7 log.go:172] (0xc002cb71e0) Data frame received for 1 I0308 17:54:04.151257 7 log.go:172] (0xc0015dae60) (1) Data frame handling I0308 17:54:04.151301 7 log.go:172] (0xc0015dae60) (1) Data frame sent I0308 17:54:04.151324 7 log.go:172] (0xc002cb71e0) (0xc0015dae60) Stream removed, broadcasting: 1 I0308 17:54:04.151345 7 log.go:172] (0xc002cb71e0) Go away received I0308 17:54:04.151534 7 log.go:172] (0xc002cb71e0) (0xc0015dae60) Stream removed, broadcasting: 1 I0308 17:54:04.151563 7 log.go:172] (0xc002cb71e0) (0xc0015dafa0) Stream removed, broadcasting: 3 I0308 17:54:04.151593 7 log.go:172] (0xc002cb71e0) (0xc0015db040) Stream removed, broadcasting: 5 Mar 8 17:54:04.151: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:54:04.151: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "emptydir-9119" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":208,"skipped":3669,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:54:04.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 8 17:54:04.209: INFO: Waiting up to 5m0s for pod "downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7" in namespace "downward-api-5473" to be "Succeeded or Failed" Mar 8 17:54:04.213: INFO: Pod "downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067743ms Mar 8 17:54:06.217: INFO: Pod "downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007875153s STEP: Saw pod success Mar 8 17:54:06.217: INFO: Pod "downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7" satisfied condition "Succeeded or Failed" Mar 8 17:54:06.219: INFO: Trying to get logs from node latest-worker2 pod downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7 container dapi-container: STEP: delete the pod Mar 8 17:54:06.249: INFO: Waiting for pod downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7 to disappear Mar 8 17:54:06.254: INFO: Pod downward-api-d2d32a8d-a781-4c91-a21d-4e03848b44e7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:54:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5473" for this suite. 
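What the Downward API test above exercises is the env-var path: fieldRef selectors resolve pod metadata at container start. A minimal sketch, with illustrative names and a busybox image standing in for the test's dapi-container:

```go
package snippets

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDownwardAPIEnvPod exposes the pod's own name and UID to its
// container through downward-API environment variables.
func createDownwardAPIEnvPod(ctx context.Context, c kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep POD_"},
				Env: []corev1.EnvVar{
					{
						Name: "POD_NAME",
						ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						},
					},
					{
						Name: "POD_UID",
						ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
						},
					},
				},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```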
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3684,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:54:06.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-lvqh STEP: Creating a pod to test atomic-volume-subpath Mar 8 17:54:06.366: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lvqh" in namespace "subpath-9080" to be "Succeeded or Failed" Mar 8 17:54:06.389: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Pending", Reason="", readiness=false. Elapsed: 22.905375ms Mar 8 17:54:08.393: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 2.026468098s Mar 8 17:54:10.397: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 4.030271733s Mar 8 17:54:12.400: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 6.033693986s Mar 8 17:54:14.404: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 8.037745088s Mar 8 17:54:16.408: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 10.041290955s Mar 8 17:54:18.412: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 12.045178289s Mar 8 17:54:20.423: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 14.056631937s Mar 8 17:54:22.427: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 16.060716819s Mar 8 17:54:24.431: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 18.064692618s Mar 8 17:54:26.435: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Running", Reason="", readiness=true. Elapsed: 20.068304962s Mar 8 17:54:28.439: INFO: Pod "pod-subpath-test-configmap-lvqh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.07209379s STEP: Saw pod success Mar 8 17:54:28.439: INFO: Pod "pod-subpath-test-configmap-lvqh" satisfied condition "Succeeded or Failed" Mar 8 17:54:28.441: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-lvqh container test-container-subpath-configmap-lvqh: STEP: delete the pod Mar 8 17:54:28.460: INFO: Waiting for pod pod-subpath-test-configmap-lvqh to disappear Mar 8 17:54:28.465: INFO: Pod pod-subpath-test-configmap-lvqh no longer exists STEP: Deleting pod pod-subpath-test-configmap-lvqh Mar 8 17:54:28.465: INFO: Deleting pod "pod-subpath-test-configmap-lvqh" in namespace "subpath-9080" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:54:28.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9080" for this suite. • [SLOW TEST:22.209 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":210,"skipped":3689,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:54:28.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7616 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7616 I0308 17:54:28.678237 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7616, replica count: 2 I0308 17:54:31.728657 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 17:54:31.728: INFO: Creating new exec pod Mar 8 17:54:36.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7616 execpod5zzf9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 17:54:36.965: INFO: stderr: "I0308 17:54:36.894912 2623 log.go:172] (0xc00003a0b0) (0xc000a9e000) Create stream\nI0308 17:54:36.894969 2623 log.go:172] (0xc00003a0b0) (0xc000a9e000) Stream added, broadcasting: 1\nI0308 
17:54:36.897146 2623 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0308 17:54:36.897195 2623 log.go:172] (0xc00003a0b0) (0xc0007c52c0) Create stream\nI0308 17:54:36.897209 2623 log.go:172] (0xc00003a0b0) (0xc0007c52c0) Stream added, broadcasting: 3\nI0308 17:54:36.898232 2623 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0308 17:54:36.898264 2623 log.go:172] (0xc00003a0b0) (0xc0007c5360) Create stream\nI0308 17:54:36.898273 2623 log.go:172] (0xc00003a0b0) (0xc0007c5360) Stream added, broadcasting: 5\nI0308 17:54:36.899073 2623 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0308 17:54:36.958079 2623 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 17:54:36.958103 2623 log.go:172] (0xc0007c5360) (5) Data frame handling\nI0308 17:54:36.958146 2623 log.go:172] (0xc0007c5360) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 17:54:36.959060 2623 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 17:54:36.959078 2623 log.go:172] (0xc0007c5360) (5) Data frame handling\nI0308 17:54:36.959092 2623 log.go:172] (0xc0007c5360) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 17:54:36.959551 2623 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0308 17:54:36.959571 2623 log.go:172] (0xc0007c52c0) (3) Data frame handling\nI0308 17:54:36.959844 2623 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0308 17:54:36.959861 2623 log.go:172] (0xc0007c5360) (5) Data frame handling\nI0308 17:54:36.961490 2623 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0308 17:54:36.961515 2623 log.go:172] (0xc000a9e000) (1) Data frame handling\nI0308 17:54:36.961525 2623 log.go:172] (0xc000a9e000) (1) Data frame sent\nI0308 17:54:36.961536 2623 log.go:172] (0xc00003a0b0) (0xc000a9e000) Stream removed, broadcasting: 1\nI0308 17:54:36.961770 2623 log.go:172] (0xc00003a0b0) Go away received\nI0308 17:54:36.961821 2623 log.go:172] (0xc00003a0b0) (0xc000a9e000) Stream removed, broadcasting: 1\nI0308 17:54:36.961838 2623 log.go:172] (0xc00003a0b0) (0xc0007c52c0) Stream removed, broadcasting: 3\nI0308 17:54:36.961846 2623 log.go:172] (0xc00003a0b0) (0xc0007c5360) Stream removed, broadcasting: 5\n" Mar 8 17:54:36.965: INFO: stdout: "" Mar 8 17:54:36.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7616 execpod5zzf9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.207.222 80' Mar 8 17:54:37.184: INFO: stderr: "I0308 17:54:37.106841 2643 log.go:172] (0xc0009b0000) (0xc0006bb180) Create stream\nI0308 17:54:37.106907 2643 log.go:172] (0xc0009b0000) (0xc0006bb180) Stream added, broadcasting: 1\nI0308 17:54:37.109737 2643 log.go:172] (0xc0009b0000) Reply frame received for 1\nI0308 17:54:37.109803 2643 log.go:172] (0xc0009b0000) (0xc0009dc000) Create stream\nI0308 17:54:37.109828 2643 log.go:172] (0xc0009b0000) (0xc0009dc000) Stream added, broadcasting: 3\nI0308 17:54:37.110835 2643 log.go:172] (0xc0009b0000) Reply frame received for 3\nI0308 17:54:37.110870 2643 log.go:172] (0xc0009b0000) (0xc0009dc0a0) Create stream\nI0308 17:54:37.110880 2643 log.go:172] (0xc0009b0000) (0xc0009dc0a0) Stream added, broadcasting: 5\nI0308 17:54:37.111671 2643 log.go:172] (0xc0009b0000) Reply frame received for 5\nI0308 17:54:37.177980 2643 log.go:172] (0xc0009b0000) Data frame received for 5\nI0308 17:54:37.178011 2643 log.go:172] (0xc0009dc0a0) (5) Data frame handling\nI0308 17:54:37.178030 2643 log.go:172] (0xc0009dc0a0) (5) Data frame sent\nI0308 
17:54:37.178039 2643 log.go:172] (0xc0009b0000) Data frame received for 5\nI0308 17:54:37.178046 2643 log.go:172] (0xc0009dc0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.207.222 80\nConnection to 10.96.207.222 80 port [tcp/http] succeeded!\nI0308 17:54:37.178107 2643 log.go:172] (0xc0009b0000) Data frame received for 3\nI0308 17:54:37.178151 2643 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0308 17:54:37.179918 2643 log.go:172] (0xc0009b0000) Data frame received for 1\nI0308 17:54:37.179946 2643 log.go:172] (0xc0006bb180) (1) Data frame handling\nI0308 17:54:37.179959 2643 log.go:172] (0xc0006bb180) (1) Data frame sent\nI0308 17:54:37.179975 2643 log.go:172] (0xc0009b0000) (0xc0006bb180) Stream removed, broadcasting: 1\nI0308 17:54:37.179996 2643 log.go:172] (0xc0009b0000) Go away received\nI0308 17:54:37.180373 2643 log.go:172] (0xc0009b0000) (0xc0006bb180) Stream removed, broadcasting: 1\nI0308 17:54:37.180395 2643 log.go:172] (0xc0009b0000) (0xc0009dc000) Stream removed, broadcasting: 3\nI0308 17:54:37.180405 2643 log.go:172] (0xc0009b0000) (0xc0009dc0a0) Stream removed, broadcasting: 5\n" Mar 8 17:54:37.184: INFO: stdout: "" Mar 8 17:54:37.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7616 execpod5zzf9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31748' Mar 8 17:54:37.393: INFO: stderr: "I0308 17:54:37.314726 2663 log.go:172] (0xc000979600) (0xc000a12780) Create stream\nI0308 17:54:37.314771 2663 log.go:172] (0xc000979600) (0xc000a12780) Stream added, broadcasting: 1\nI0308 17:54:37.318374 2663 log.go:172] (0xc000979600) Reply frame received for 1\nI0308 17:54:37.318424 2663 log.go:172] (0xc000979600) (0xc0007d5720) Create stream\nI0308 17:54:37.318439 2663 log.go:172] (0xc000979600) (0xc0007d5720) Stream added, broadcasting: 3\nI0308 17:54:37.319289 2663 log.go:172] (0xc000979600) Reply frame received for 3\nI0308 17:54:37.319319 2663 log.go:172] (0xc000979600) (0xc000552b40) Create stream\nI0308 17:54:37.319329 2663 log.go:172] (0xc000979600) (0xc000552b40) Stream added, broadcasting: 5\nI0308 17:54:37.320175 2663 log.go:172] (0xc000979600) Reply frame received for 5\nI0308 17:54:37.388566 2663 log.go:172] (0xc000979600) Data frame received for 5\nI0308 17:54:37.388608 2663 log.go:172] (0xc000552b40) (5) Data frame handling\nI0308 17:54:37.388622 2663 log.go:172] (0xc000552b40) (5) Data frame sent\nI0308 17:54:37.388629 2663 log.go:172] (0xc000979600) Data frame received for 5\nI0308 17:54:37.388635 2663 log.go:172] (0xc000552b40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.16 31748\nConnection to 172.17.0.16 31748 port [tcp/31748] succeeded!\nI0308 17:54:37.388653 2663 log.go:172] (0xc000979600) Data frame received for 3\nI0308 17:54:37.388659 2663 log.go:172] (0xc0007d5720) (3) Data frame handling\nI0308 17:54:37.389882 2663 log.go:172] (0xc000979600) Data frame received for 1\nI0308 17:54:37.389903 2663 log.go:172] (0xc000a12780) (1) Data frame handling\nI0308 17:54:37.389912 2663 log.go:172] (0xc000a12780) (1) Data frame sent\nI0308 17:54:37.389923 2663 log.go:172] (0xc000979600) (0xc000a12780) Stream removed, broadcasting: 1\nI0308 17:54:37.389943 2663 log.go:172] (0xc000979600) Go away received\nI0308 17:54:37.390240 2663 log.go:172] (0xc000979600) (0xc000a12780) Stream removed, broadcasting: 1\nI0308 17:54:37.390257 2663 log.go:172] (0xc000979600) (0xc0007d5720) Stream removed, broadcasting: 3\nI0308 17:54:37.390264 2663 log.go:172] (0xc000979600) (0xc000552b40) 
Stream removed, broadcasting: 5\n" Mar 8 17:54:37.393: INFO: stdout: "" Mar 8 17:54:37.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-7616 execpod5zzf9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31748' Mar 8 17:54:37.584: INFO: stderr: "I0308 17:54:37.521121 2685 log.go:172] (0xc000c0af20) (0xc000a46640) Create stream\nI0308 17:54:37.521176 2685 log.go:172] (0xc000c0af20) (0xc000a46640) Stream added, broadcasting: 1\nI0308 17:54:37.525282 2685 log.go:172] (0xc000c0af20) Reply frame received for 1\nI0308 17:54:37.525316 2685 log.go:172] (0xc000c0af20) (0xc0006e3860) Create stream\nI0308 17:54:37.525322 2685 log.go:172] (0xc000c0af20) (0xc0006e3860) Stream added, broadcasting: 3\nI0308 17:54:37.526094 2685 log.go:172] (0xc000c0af20) Reply frame received for 3\nI0308 17:54:37.526157 2685 log.go:172] (0xc000c0af20) (0xc000530c80) Create stream\nI0308 17:54:37.526171 2685 log.go:172] (0xc000c0af20) (0xc000530c80) Stream added, broadcasting: 5\nI0308 17:54:37.526962 2685 log.go:172] (0xc000c0af20) Reply frame received for 5\nI0308 17:54:37.580017 2685 log.go:172] (0xc000c0af20) Data frame received for 5\nI0308 17:54:37.580039 2685 log.go:172] (0xc000530c80) (5) Data frame handling\nI0308 17:54:37.580048 2685 log.go:172] (0xc000530c80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 31748\nConnection to 172.17.0.18 31748 port [tcp/31748] succeeded!\nI0308 17:54:37.580155 2685 log.go:172] (0xc000c0af20) Data frame received for 3\nI0308 17:54:37.580167 2685 log.go:172] (0xc0006e3860) (3) Data frame handling\nI0308 17:54:37.580203 2685 log.go:172] (0xc000c0af20) Data frame received for 5\nI0308 17:54:37.580232 2685 log.go:172] (0xc000530c80) (5) Data frame handling\nI0308 17:54:37.581145 2685 log.go:172] (0xc000c0af20) Data frame received for 1\nI0308 17:54:37.581159 2685 log.go:172] (0xc000a46640) (1) Data frame handling\nI0308 17:54:37.581175 2685 log.go:172] (0xc000a46640) (1) Data frame sent\nI0308 17:54:37.581329 2685 log.go:172] (0xc000c0af20) (0xc000a46640) Stream removed, broadcasting: 1\nI0308 17:54:37.581380 2685 log.go:172] (0xc000c0af20) Go away received\nI0308 17:54:37.581583 2685 log.go:172] (0xc000c0af20) (0xc000a46640) Stream removed, broadcasting: 1\nI0308 17:54:37.581595 2685 log.go:172] (0xc000c0af20) (0xc0006e3860) Stream removed, broadcasting: 3\nI0308 17:54:37.581601 2685 log.go:172] (0xc000c0af20) (0xc000530c80) Stream removed, broadcasting: 5\n" Mar 8 17:54:37.584: INFO: stdout: "" Mar 8 17:54:37.584: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:54:37.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7616" for this suite. 
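The type change driven above amounts to a create-then-update on the Service object. A rough client-go sketch, assuming pods labeled name=externalname-service back the service once it has endpoints; the selector, port, and external name here are illustrative, not the test's generated values:

```go
package snippets

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// switchExternalNameToNodePort creates an ExternalName service and then
// mutates it into a NodePort service, as the test above does.
func switchExternalNameToNodePort(ctx context.Context, c kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com", // CNAME target; placeholder
		},
	}
	created, err := c.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Switch the type; a selector and port are needed once the service is
	// backed by endpoints instead of a CNAME record.
	created.Spec.Type = corev1.ServiceTypeNodePort
	created.Spec.ExternalName = ""
	created.Spec.Selector = map[string]string{"name": "externalname-service"}
	created.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}
	_, err = c.CoreV1().Services(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}
```

After the update the apiserver assigns a node port automatically, which is why the test can then nc the node IPs on the allocated port (31748 in the run above).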
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.215 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":211,"skipped":3705,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:54:37.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2650 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 17:54:37.742: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 8 17:54:37.895: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 8 17:54:39.899: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:41.899: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:43.898: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:45.899: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:47.917: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:49.898: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:51.898: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:53.898: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 8 17:54:55.898: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 8 17:54:55.903: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 8 17:54:57.973: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.100 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2650 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:54:57.973: INFO: >>> kubeConfig: /root/.kube/config I0308 17:54:58.007009 7 log.go:172] (0xc002d373f0) (0xc001ae1e00) Create stream I0308 17:54:58.007056 7 log.go:172] (0xc002d373f0) (0xc001ae1e00) Stream added, broadcasting: 1 I0308 17:54:58.009267 7 log.go:172] (0xc002d373f0) Reply frame received for 1 I0308 17:54:58.009322 7 log.go:172] (0xc002d373f0) (0xc001ae1ea0) Create stream I0308 17:54:58.009340 7 log.go:172] (0xc002d373f0) (0xc001ae1ea0) Stream added, broadcasting: 3 I0308 17:54:58.010411 7 log.go:172] (0xc002d373f0) Reply frame received for 3 
I0308 17:54:58.010450 7 log.go:172] (0xc002d373f0) (0xc001fd7400) Create stream I0308 17:54:58.010462 7 log.go:172] (0xc002d373f0) (0xc001fd7400) Stream added, broadcasting: 5 I0308 17:54:58.011387 7 log.go:172] (0xc002d373f0) Reply frame received for 5 I0308 17:54:59.066221 7 log.go:172] (0xc002d373f0) Data frame received for 3 I0308 17:54:59.066370 7 log.go:172] (0xc002d373f0) Data frame received for 5 I0308 17:54:59.066404 7 log.go:172] (0xc001fd7400) (5) Data frame handling I0308 17:54:59.066445 7 log.go:172] (0xc001ae1ea0) (3) Data frame handling I0308 17:54:59.066493 7 log.go:172] (0xc001ae1ea0) (3) Data frame sent I0308 17:54:59.066507 7 log.go:172] (0xc002d373f0) Data frame received for 3 I0308 17:54:59.066517 7 log.go:172] (0xc001ae1ea0) (3) Data frame handling I0308 17:54:59.068251 7 log.go:172] (0xc002d373f0) Data frame received for 1 I0308 17:54:59.068273 7 log.go:172] (0xc001ae1e00) (1) Data frame handling I0308 17:54:59.068285 7 log.go:172] (0xc001ae1e00) (1) Data frame sent I0308 17:54:59.068302 7 log.go:172] (0xc002d373f0) (0xc001ae1e00) Stream removed, broadcasting: 1 I0308 17:54:59.068342 7 log.go:172] (0xc002d373f0) Go away received I0308 17:54:59.068439 7 log.go:172] (0xc002d373f0) (0xc001ae1e00) Stream removed, broadcasting: 1 I0308 17:54:59.068455 7 log.go:172] (0xc002d373f0) (0xc001ae1ea0) Stream removed, broadcasting: 3 I0308 17:54:59.068465 7 log.go:172] (0xc002d373f0) (0xc001fd7400) Stream removed, broadcasting: 5 Mar 8 17:54:59.068: INFO: Found all expected endpoints: [netserver-0] Mar 8 17:54:59.072: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.245 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2650 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:54:59.072: INFO: >>> kubeConfig: /root/.kube/config I0308 17:54:59.098019 7 log.go:172] (0xc002d37a20) (0xc000b652c0) Create stream I0308 17:54:59.098056 7 log.go:172] (0xc002d37a20) (0xc000b652c0) Stream added, broadcasting: 1 I0308 17:54:59.100217 7 log.go:172] (0xc002d37a20) Reply frame received for 1 I0308 17:54:59.100240 7 log.go:172] (0xc002d37a20) (0xc0016d9cc0) Create stream I0308 17:54:59.100253 7 log.go:172] (0xc002d37a20) (0xc0016d9cc0) Stream added, broadcasting: 3 I0308 17:54:59.101004 7 log.go:172] (0xc002d37a20) Reply frame received for 3 I0308 17:54:59.101042 7 log.go:172] (0xc002d37a20) (0xc0016d9e00) Create stream I0308 17:54:59.101052 7 log.go:172] (0xc002d37a20) (0xc0016d9e00) Stream added, broadcasting: 5 I0308 17:54:59.101719 7 log.go:172] (0xc002d37a20) Reply frame received for 5 I0308 17:55:00.154800 7 log.go:172] (0xc002d37a20) Data frame received for 3 I0308 17:55:00.154836 7 log.go:172] (0xc0016d9cc0) (3) Data frame handling I0308 17:55:00.154860 7 log.go:172] (0xc0016d9cc0) (3) Data frame sent I0308 17:55:00.154870 7 log.go:172] (0xc002d37a20) Data frame received for 3 I0308 17:55:00.154913 7 log.go:172] (0xc0016d9cc0) (3) Data frame handling I0308 17:55:00.155024 7 log.go:172] (0xc002d37a20) Data frame received for 5 I0308 17:55:00.155052 7 log.go:172] (0xc0016d9e00) (5) Data frame handling I0308 17:55:00.156885 7 log.go:172] (0xc002d37a20) Data frame received for 1 I0308 17:55:00.156905 7 log.go:172] (0xc000b652c0) (1) Data frame handling I0308 17:55:00.156914 7 log.go:172] (0xc000b652c0) (1) Data frame sent I0308 17:55:00.156944 7 log.go:172] (0xc002d37a20) (0xc000b652c0) Stream removed, broadcasting: 1 I0308 17:55:00.157058 7 log.go:172] (0xc002d37a20) (0xc000b652c0) 
Stream removed, broadcasting: 1 I0308 17:55:00.157084 7 log.go:172] (0xc002d37a20) Go away received I0308 17:55:00.157118 7 log.go:172] (0xc002d37a20) (0xc0016d9cc0) Stream removed, broadcasting: 3 I0308 17:55:00.157138 7 log.go:172] (0xc002d37a20) (0xc0016d9e00) Stream removed, broadcasting: 5 Mar 8 17:55:00.157: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:00.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2650" for this suite. • [SLOW TEST:22.475 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3708,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:00.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 17:55:04.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 17:55:04.502: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 17:55:06.502: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 17:55:06.506: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 17:55:08.502: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 17:55:08.507: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 17:55:10.502: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 17:55:10.506: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 17:55:12.502: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 17:55:12.510: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:12.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2209" for this suite. • [SLOW TEST:12.352 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3725,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:12.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:12.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-853" for this suite. 
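The Lease API checked above lives in the coordination.k8s.io group; a typical round-trip is create, then periodically bump spec.renewTime, which is how leader-election clients use it. A minimal sketch with placeholder names and durations:

```go
package snippets

import (
	"context"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndRenewLease creates a Lease and then updates its renew time,
// the basic lifecycle the lease API test exercises.
func createAndRenewLease(ctx context.Context, c kubernetes.Interface, ns string) error {
	holder := "demo-holder" // placeholder identity
	seconds := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
		},
	}
	created, err := c.CoordinationV1().Leases(ns).Create(ctx, lease, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	now := metav1.NowMicro()
	created.Spec.RenewTime = &now
	_, err = c.CoordinationV1().Leases(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}
```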
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":214,"skipped":3733,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:12.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:55:12.718: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 17:55:14.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2718 create -f -' Mar 8 17:55:16.652: INFO: stderr: "" Mar 8 17:55:16.652: INFO: stdout: "e2e-test-crd-publish-openapi-5897-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 17:55:16.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2718 delete e2e-test-crd-publish-openapi-5897-crds test-cr' Mar 8 17:55:16.776: INFO: stderr: "" Mar 8 17:55:16.776: INFO: stdout: "e2e-test-crd-publish-openapi-5897-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 8 17:55:16.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2718 apply -f -' Mar 8 17:55:17.042: INFO: stderr: "" Mar 8 17:55:17.042: INFO: stdout: "e2e-test-crd-publish-openapi-5897-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 17:55:17.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2718 delete e2e-test-crd-publish-openapi-5897-crds test-cr' Mar 8 17:55:17.139: INFO: stderr: "" Mar 8 17:55:17.139: INFO: stdout: "e2e-test-crd-publish-openapi-5897-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 17:55:17.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5897-crds' Mar 8 17:55:17.394: INFO: stderr: "" Mar 8 17:55:17.394: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5897-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2718" for this suite. 
• [SLOW TEST:6.625 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":215,"skipped":3750,"failed":0} SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:19.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Mar 8 17:55:21.391: INFO: Pod pod-hostip-f2b2de60-31c8-405b-bd85-9fdf8175989b has hostIP: 172.17.0.16 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:21.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3389" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:21.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 17:55:21.480: INFO: Waiting up to 5m0s for pod "pod-f052881c-afce-4b97-ba95-4af861984c03" in namespace "emptydir-1370" to be "Succeeded or Failed" Mar 8 17:55:21.505: INFO: Pod "pod-f052881c-afce-4b97-ba95-4af861984c03": Phase="Pending", Reason="", readiness=false. Elapsed: 25.088798ms Mar 8 17:55:23.509: INFO: Pod "pod-f052881c-afce-4b97-ba95-4af861984c03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029127645s STEP: Saw pod success Mar 8 17:55:23.509: INFO: Pod "pod-f052881c-afce-4b97-ba95-4af861984c03" satisfied condition "Succeeded or Failed" Mar 8 17:55:23.512: INFO: Trying to get logs from node latest-worker2 pod pod-f052881c-afce-4b97-ba95-4af861984c03 container test-container: STEP: delete the pod Mar 8 17:55:23.528: INFO: Waiting for pod pod-f052881c-afce-4b97-ba95-4af861984c03 to disappear Mar 8 17:55:23.532: INFO: Pod pod-f052881c-afce-4b97-ba95-4af861984c03 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:23.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1370" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:23.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-7a5ed82a-6f3f-4fd8-89de-d0222936adad STEP: Creating configMap with name cm-test-opt-upd-a586408f-9b51-4b6f-ba11-10d4623d00bc STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7a5ed82a-6f3f-4fd8-89de-d0222936adad STEP: Updating configmap cm-test-opt-upd-a586408f-9b51-4b6f-ba11-10d4623d00bc STEP: Creating configMap with name cm-test-opt-create-365bdcdc-4f41-49bf-a00e-ef93af65614e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:29.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5368" for this suite. 
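The optional-updates test above relies on projected volumes whose ConfigMap sources are marked optional: a missing map does not block pod startup, and the kubelet refreshes the projected files when a map is created, updated, or deleted. A minimal sketch with placeholder names:

```go
package snippets

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOptionalProjectedPod mounts two optional ConfigMaps through one
// projected volume; the pod starts even if a referenced map is absent.
func createOptionalProjectedPod(ctx context.Context, c kubernetes.Interface, ns string) error {
	optional := true
	projection := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			ConfigMap: &corev1.ConfigMapProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional,
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							projection("cm-test-opt-del"), // placeholder names
							projection("cm-test-opt-upd"),
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-vol", MountPath: "/etc/projected"}},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```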
• [SLOW TEST:6.198 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3822,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:29.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 8 17:55:29.869: INFO: Created pod &Pod{ObjectMeta:{dns-2295 dns-2295 /api/v1/namespaces/dns-2295/pods/dns-2295 5e5f2bcc-d97f-4ad1-a538-7b461180238f 62381 0 2020-03-08 17:55:29 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cbh55,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cbh55,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cbh55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:n
ode.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 17:55:29.929: INFO: The status of Pod dns-2295 is Pending, waiting for it to be Running (with Ready = true) Mar 8 17:55:31.933: INFO: The status of Pod dns-2295 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 8 17:55:31.933: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2295 PodName:dns-2295 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:55:31.933: INFO: >>> kubeConfig: /root/.kube/config I0308 17:55:31.968746 7 log.go:172] (0xc00560e4d0) (0xc000b65680) Create stream I0308 17:55:31.968779 7 log.go:172] (0xc00560e4d0) (0xc000b65680) Stream added, broadcasting: 1 I0308 17:55:31.972458 7 log.go:172] (0xc00560e4d0) Reply frame received for 1 I0308 17:55:31.972494 7 log.go:172] (0xc00560e4d0) (0xc000b65900) Create stream I0308 17:55:31.972511 7 log.go:172] (0xc00560e4d0) (0xc000b65900) Stream added, broadcasting: 3 I0308 17:55:31.975143 7 log.go:172] (0xc00560e4d0) Reply frame received for 3 I0308 17:55:31.975180 7 log.go:172] (0xc00560e4d0) (0xc001fd7360) Create stream I0308 17:55:31.975198 7 log.go:172] (0xc00560e4d0) (0xc001fd7360) Stream added, broadcasting: 5 I0308 17:55:31.976302 7 log.go:172] (0xc00560e4d0) Reply frame received for 5 I0308 17:55:32.042546 7 log.go:172] (0xc00560e4d0) Data frame received for 3 I0308 17:55:32.042575 7 log.go:172] (0xc000b65900) (3) Data frame handling I0308 17:55:32.042591 7 log.go:172] (0xc000b65900) (3) Data frame sent I0308 17:55:32.043217 7 log.go:172] (0xc00560e4d0) Data frame received for 3 I0308 17:55:32.043276 7 log.go:172] (0xc000b65900) (3) Data frame handling I0308 17:55:32.043981 7 log.go:172] (0xc00560e4d0) Data frame received for 5 I0308 17:55:32.044028 7 log.go:172] (0xc001fd7360) (5) Data frame handling I0308 17:55:32.045826 7 log.go:172] (0xc00560e4d0) Data frame received for 1 I0308 17:55:32.045852 7 log.go:172] (0xc000b65680) (1) Data frame handling I0308 17:55:32.045874 7 log.go:172] (0xc000b65680) (1) Data frame sent I0308 17:55:32.045890 7 log.go:172] (0xc00560e4d0) (0xc000b65680) Stream removed, broadcasting: 1 I0308 17:55:32.045906 7 log.go:172] (0xc00560e4d0) Go away received I0308 17:55:32.046401 7 log.go:172] (0xc00560e4d0) (0xc000b65680) Stream removed, broadcasting: 1 I0308 17:55:32.046427 7 log.go:172] (0xc00560e4d0) (0xc000b65900) Stream removed, broadcasting: 3 I0308 17:55:32.046444 7 log.go:172] (0xc00560e4d0) (0xc001fd7360) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 8 17:55:32.051: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2295 PodName:dns-2295 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 17:55:32.051: INFO: >>> kubeConfig: /root/.kube/config I0308 17:55:32.080856 7 log.go:172] (0xc002cd60b0) (0xc000365720) Create stream I0308 17:55:32.080883 7 log.go:172] (0xc002cd60b0) (0xc000365720) Stream added, broadcasting: 1 I0308 17:55:32.084848 7 log.go:172] (0xc002cd60b0) Reply frame received for 1 I0308 17:55:32.084880 7 log.go:172] (0xc002cd60b0) (0xc000c15720) Create stream I0308 17:55:32.084891 7 log.go:172] (0xc002cd60b0) (0xc000c15720) Stream added, broadcasting: 3 I0308 17:55:32.087018 7 log.go:172] (0xc002cd60b0) Reply frame received for 3 I0308 17:55:32.087056 7 log.go:172] (0xc002cd60b0) (0xc0003659a0) Create stream I0308 17:55:32.087067 7 log.go:172] (0xc002cd60b0) (0xc0003659a0) Stream added, broadcasting: 5 I0308 17:55:32.088848 7 log.go:172] (0xc002cd60b0) Reply frame received for 5 I0308 17:55:32.143438 7 log.go:172] (0xc002cd60b0) Data frame received for 3 I0308 17:55:32.143460 7 log.go:172] (0xc000c15720) (3) Data frame handling I0308 17:55:32.143475 7 log.go:172] (0xc000c15720) (3) Data frame sent I0308 17:55:32.143871 7 log.go:172] (0xc002cd60b0) Data frame received for 3 I0308 17:55:32.143891 7 log.go:172] (0xc000c15720) (3) Data frame handling I0308 17:55:32.143969 7 log.go:172] (0xc002cd60b0) Data frame received for 5 I0308 17:55:32.143983 7 log.go:172] (0xc0003659a0) (5) Data frame handling I0308 17:55:32.145412 7 log.go:172] (0xc002cd60b0) Data frame received for 1 I0308 17:55:32.145434 7 log.go:172] (0xc000365720) (1) Data frame handling I0308 17:55:32.145449 7 log.go:172] (0xc000365720) (1) Data frame sent I0308 17:55:32.145462 7 log.go:172] (0xc002cd60b0) (0xc000365720) Stream removed, broadcasting: 1 I0308 17:55:32.145483 7 log.go:172] (0xc002cd60b0) Go away received I0308 17:55:32.145572 7 log.go:172] (0xc002cd60b0) (0xc000365720) Stream removed, broadcasting: 1 I0308 17:55:32.145592 7 log.go:172] (0xc002cd60b0) (0xc000c15720) Stream removed, broadcasting: 3 I0308 17:55:32.145605 7 log.go:172] (0xc002cd60b0) (0xc0003659a0) Stream removed, broadcasting: 5 Mar 8 17:55:32.145: INFO: Deleting pod dns-2295... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:32.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2295" for this suite. 
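The long pod spec dumped above boils down to two fields: dnsPolicy None and a custom dnsConfig carrying the 1.1.1.1 nameserver and the resolv.conf.local search suffix that the test then verifies inside the container. A compact sketch of the same shape; the pod name and image are illustrative:

```go
package snippets

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCustomDNSPod opts the pod out of cluster DNS entirely and supplies
// its own resolver and search suffix, mirroring the spec dumped above.
func createCustomDNSPod(ctx context.Context, c kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"},
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone, // ignore cluster DNS; use DNSConfig verbatim
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:    "pause",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```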
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":219,"skipped":3830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:32.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 17:55:36.310: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 17:55:36.329: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 17:55:38.329: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 17:55:38.335: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 17:55:40.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 17:55:40.332: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:40.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9689" for this suite. 
• [SLOW TEST:8.144 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3853,"failed":0} SS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:40.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:40.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7343" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":221,"skipped":3855,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:40.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:42.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6673" for this suite. 
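------------------------------
The Kubelet test above schedules a busybox command in a pod and asserts its stdout is visible through the pod-logs endpoint. Fetching those logs with client-go looks roughly like this (a sketch, assuming v0.18+ signatures; this is essentially the read path the framework's log helpers use):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podLogs returns the full log of a single-container pod, which the
// test then greps for the expected command output.
func podLogs(ctx context.Context, cs kubernetes.Interface, ns, pod string) (string, error) {
	raw, err := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{}).Do(ctx).Raw()
	if err != nil {
		return "", err
	}
	return string(raw), nil
}
------------------------------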
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3856,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:42.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 8 17:55:42.552: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2746 /api/v1/namespaces/watch-2746/configmaps/e2e-watch-test-resource-version d00726e8-0410-4fad-95ab-570eddf36977 62512 0 2020-03-08 17:55:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 17:55:42.552: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2746 /api/v1/namespaces/watch-2746/configmaps/e2e-watch-test-resource-version d00726e8-0410-4fad-95ab-570eddf36977 62514 0 2020-03-08 17:55:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:42.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2746" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":223,"skipped":3872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:42.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3290.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3290.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3290.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3290.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:55:46.658: INFO: DNS probes using dns-3290/dns-test-48e2958e-576e-438e-b499-770ed7d8d449 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:46.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3290" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":224,"skipped":3895,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:46.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-70a92a1f-6661-4042-8bd4-de8e6501062a STEP: Creating configMap with name cm-test-opt-upd-d78596ff-9f18-4192-892b-50b2587edae7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-70a92a1f-6661-4042-8bd4-de8e6501062a STEP: Updating configmap cm-test-opt-upd-d78596ff-9f18-4192-892b-50b2587edae7 STEP: Creating configMap with name cm-test-opt-create-5860f232-e8cf-49b4-b673-f5c4a89030c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:52.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4180" for this suite. • [SLOW TEST:6.246 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:52.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 8 17:55:53.068: INFO: Waiting up to 5m0s for pod "var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80" in namespace "var-expansion-3584" to be "Succeeded or Failed" Mar 8 17:55:53.078: INFO: Pod "var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.703215ms Mar 8 17:55:55.081: INFO: Pod "var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012782361s Mar 8 17:55:57.085: INFO: Pod "var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016934393s STEP: Saw pod success Mar 8 17:55:57.085: INFO: Pod "var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80" satisfied condition "Succeeded or Failed" Mar 8 17:55:57.089: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80 container dapi-container: STEP: delete the pod Mar 8 17:55:57.110: INFO: Waiting for pod var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80 to disappear Mar 8 17:55:57.114: INFO: Pod var-expansion-bdc985d3-f7dd-400b-944c-86f80e9cba80 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3584" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3951,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:57.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 8 17:55:57.170: INFO: Waiting up to 5m0s for pod "downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d" in namespace "downward-api-700" to be "Succeeded or Failed" Mar 8 17:55:57.206: INFO: Pod "downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.865901ms Mar 8 17:55:59.211: INFO: Pod "downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040858506s STEP: Saw pod success Mar 8 17:55:59.211: INFO: Pod "downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d" satisfied condition "Succeeded or Failed" Mar 8 17:55:59.213: INFO: Trying to get logs from node latest-worker2 pod downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d container dapi-container: STEP: delete the pod Mar 8 17:55:59.229: INFO: Waiting for pod downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d to disappear Mar 8 17:55:59.255: INFO: Pod downward-api-ba33ce5b-cdc1-4ae7-8e3e-6007ce1c3d4d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:55:59.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-700" for this suite. 
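------------------------------
The Downward API test above exposes the container's own limits.cpu/limits.memory and requests.cpu/requests.memory to the process as environment variables via resourceFieldRef. A sketch of two such entries (variable names illustrative; Divisor rescales the reported quantity):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Environment variables populated from the container's own resources.
var downwardEnv = []corev1.EnvVar{
	{
		Name: "CPU_LIMIT",
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldRef{Resource: "limits.cpu"},
		},
	},
	{
		Name: "MEMORY_REQUEST", // reported in Mi because of the divisor
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldRef{
				Resource: "requests.memory",
				Divisor:  resource.MustParse("1Mi"),
			},
		},
	},
}
------------------------------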
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3966,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:55:59.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 8 17:55:59.296: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 17:55:59.339: INFO: Waiting for terminating namespaces to be deleted... Mar 8 17:55:59.341: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 8 17:55:59.348: INFO: busybox-scheduling-6e5243c0-f77b-4851-aeec-e8269d794510 from kubelet-test-6673 started at 2020-03-08 17:55:40 +0000 UTC (1 container statuses recorded) Mar 8 17:55:59.348: INFO: Container busybox-scheduling-6e5243c0-f77b-4851-aeec-e8269d794510 ready: true, restart count 0 Mar 8 17:55:59.348: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 17:55:59.348: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:55:59.348: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 8 17:55:59.348: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:55:59.348: INFO: pod-configmaps-75ab0499-908f-4455-91aa-27145b8d40ee from configmap-4180 started at 2020-03-08 17:55:46 +0000 UTC (3 container statuses recorded) Mar 8 17:55:59.348: INFO: Container createcm-volume-test ready: false, restart count 0 Mar 8 17:55:59.348: INFO: Container delcm-volume-test ready: false, restart count 0 Mar 8 17:55:59.348: INFO: Container updcm-volume-test ready: false, restart count 0 Mar 8 17:55:59.348: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 8 17:55:59.354: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 17:55:59.354: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:55:59.354: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 8 17:55:59.354: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:55:59.354: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 8 17:55:59.354: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-00f662ff-b838-49d4-929f-4331ace0a9bb 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-00f662ff-b838-49d4-929f-4331ace0a9bb off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-00f662ff-b838-49d4-929f-4331ace0a9bb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:09.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4289" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:10.244 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":228,"skipped":3973,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:09.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-949135c8-78b4-4572-9010-d62126b6b1ee STEP: Creating a pod to test consume configMaps Mar 8 17:56:09.590: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0" in namespace "projected-7895" to be "Succeeded or Failed" Mar 8 17:56:09.594: INFO: Pod "pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787373ms Mar 8 17:56:11.597: INFO: Pod "pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007322077s STEP: Saw pod success Mar 8 17:56:11.597: INFO: Pod "pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0" satisfied condition "Succeeded or Failed" Mar 8 17:56:11.600: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0 container projected-configmap-volume-test: STEP: delete the pod Mar 8 17:56:11.633: INFO: Waiting for pod pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0 to disappear Mar 8 17:56:11.654: INFO: Pod pod-projected-configmaps-f1425758-1a36-42dc-80b5-89da00cf20f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:11.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7895" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:11.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:56:11.705: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4533 I0308 17:56:11.721201 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4533, replica count: 1 I0308 17:56:12.771691 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 17:56:13.771973 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 17:56:13.882: INFO: Created: latency-svc-xpm4t Mar 8 17:56:13.889: INFO: Got endpoints: latency-svc-xpm4t [16.995673ms] Mar 8 17:56:13.948: INFO: Created: latency-svc-xxmx6 Mar 8 17:56:13.967: INFO: Created: latency-svc-2bst6 Mar 8 17:56:13.968: INFO: Got endpoints: latency-svc-xxmx6 [79.164517ms] Mar 8 17:56:14.001: INFO: Created: latency-svc-j9vq4 Mar 8 17:56:14.001: INFO: Got endpoints: latency-svc-2bst6 [112.411186ms] Mar 8 17:56:14.007: INFO: Got endpoints: latency-svc-j9vq4 [117.85226ms] Mar 8 17:56:14.026: INFO: Created: latency-svc-sxqdq Mar 8 17:56:14.031: INFO: Got endpoints: latency-svc-sxqdq [141.902772ms] Mar 8 17:56:14.069: INFO: Created: latency-svc-ss9pl Mar 8 17:56:14.086: INFO: Got endpoints: latency-svc-ss9pl [196.255372ms] Mar 8 17:56:14.108: INFO: Created: latency-svc-djw68 Mar 8 17:56:14.115: INFO: Got endpoints: latency-svc-djw68 [225.640768ms] Mar 8 17:56:14.150: INFO: Created: latency-svc-d7xgk Mar 8 17:56:14.182: INFO: Got endpoints: latency-svc-d7xgk [292.59967ms] Mar 8 
17:56:14.189: INFO: Created: latency-svc-f2k22 Mar 8 17:56:14.221: INFO: Created: latency-svc-p6mhw Mar 8 17:56:14.221: INFO: Got endpoints: latency-svc-f2k22 [331.464095ms] Mar 8 17:56:14.249: INFO: Created: latency-svc-hwbf7 Mar 8 17:56:14.249: INFO: Got endpoints: latency-svc-p6mhw [359.219849ms] Mar 8 17:56:14.307: INFO: Got endpoints: latency-svc-hwbf7 [417.260677ms] Mar 8 17:56:14.309: INFO: Created: latency-svc-lbsld Mar 8 17:56:14.312: INFO: Got endpoints: latency-svc-lbsld [422.857653ms] Mar 8 17:56:14.334: INFO: Created: latency-svc-xzpdv Mar 8 17:56:14.345: INFO: Got endpoints: latency-svc-xzpdv [455.099021ms] Mar 8 17:56:14.360: INFO: Created: latency-svc-lx8vn Mar 8 17:56:14.367: INFO: Got endpoints: latency-svc-lx8vn [476.896688ms] Mar 8 17:56:14.391: INFO: Created: latency-svc-kbg5w Mar 8 17:56:14.403: INFO: Got endpoints: latency-svc-kbg5w [513.080547ms] Mar 8 17:56:14.451: INFO: Created: latency-svc-977ms Mar 8 17:56:14.471: INFO: Created: latency-svc-skk9x Mar 8 17:56:14.471: INFO: Got endpoints: latency-svc-977ms [581.390992ms] Mar 8 17:56:14.481: INFO: Got endpoints: latency-svc-skk9x [512.540611ms] Mar 8 17:56:14.495: INFO: Created: latency-svc-8sv5v Mar 8 17:56:14.498: INFO: Got endpoints: latency-svc-8sv5v [497.140634ms] Mar 8 17:56:14.519: INFO: Created: latency-svc-7pvcx Mar 8 17:56:14.528: INFO: Got endpoints: latency-svc-7pvcx [521.113634ms] Mar 8 17:56:14.570: INFO: Created: latency-svc-bvhnz Mar 8 17:56:14.589: INFO: Created: latency-svc-wzcv7 Mar 8 17:56:14.589: INFO: Got endpoints: latency-svc-bvhnz [557.80217ms] Mar 8 17:56:14.594: INFO: Got endpoints: latency-svc-wzcv7 [508.650692ms] Mar 8 17:56:14.612: INFO: Created: latency-svc-rqlsn Mar 8 17:56:14.618: INFO: Got endpoints: latency-svc-rqlsn [503.096749ms] Mar 8 17:56:14.637: INFO: Created: latency-svc-4tkbj Mar 8 17:56:14.642: INFO: Got endpoints: latency-svc-4tkbj [460.027929ms] Mar 8 17:56:14.660: INFO: Created: latency-svc-56t9v Mar 8 17:56:14.702: INFO: Got endpoints: latency-svc-56t9v [481.154608ms] Mar 8 17:56:14.726: INFO: Created: latency-svc-mhn45 Mar 8 17:56:14.744: INFO: Got endpoints: latency-svc-mhn45 [495.524174ms] Mar 8 17:56:14.799: INFO: Created: latency-svc-rjs5c Mar 8 17:56:14.823: INFO: Got endpoints: latency-svc-rjs5c [515.822389ms] Mar 8 17:56:14.847: INFO: Created: latency-svc-vkb5t Mar 8 17:56:14.852: INFO: Got endpoints: latency-svc-vkb5t [539.660821ms] Mar 8 17:56:14.879: INFO: Created: latency-svc-fcxlw Mar 8 17:56:14.881: INFO: Got endpoints: latency-svc-fcxlw [536.19519ms] Mar 8 17:56:14.903: INFO: Created: latency-svc-75fz6 Mar 8 17:56:14.906: INFO: Got endpoints: latency-svc-75fz6 [539.038485ms] Mar 8 17:56:14.984: INFO: Created: latency-svc-jjdpp Mar 8 17:56:15.005: INFO: Got endpoints: latency-svc-jjdpp [602.656973ms] Mar 8 17:56:15.006: INFO: Created: latency-svc-68nbn Mar 8 17:56:15.013: INFO: Got endpoints: latency-svc-68nbn [542.146415ms] Mar 8 17:56:15.053: INFO: Created: latency-svc-gff6k Mar 8 17:56:15.065: INFO: Got endpoints: latency-svc-gff6k [584.703987ms] Mar 8 17:56:15.128: INFO: Created: latency-svc-n9vm5 Mar 8 17:56:15.153: INFO: Created: latency-svc-v8dsb Mar 8 17:56:15.153: INFO: Got endpoints: latency-svc-n9vm5 [654.618617ms] Mar 8 17:56:15.158: INFO: Got endpoints: latency-svc-v8dsb [629.511246ms] Mar 8 17:56:15.177: INFO: Created: latency-svc-zn2bx Mar 8 17:56:15.182: INFO: Got endpoints: latency-svc-zn2bx [592.795904ms] Mar 8 17:56:15.203: INFO: Created: latency-svc-tl69x Mar 8 17:56:15.212: INFO: Got endpoints: latency-svc-tl69x [617.478258ms] Mar 8 
17:56:15.253: INFO: Created: latency-svc-z2pcf Mar 8 17:56:15.315: INFO: Created: latency-svc-5tjs5 Mar 8 17:56:15.315: INFO: Got endpoints: latency-svc-z2pcf [696.362554ms] Mar 8 17:56:15.337: INFO: Got endpoints: latency-svc-5tjs5 [695.144032ms] Mar 8 17:56:15.385: INFO: Created: latency-svc-rb5zg Mar 8 17:56:15.392: INFO: Got endpoints: latency-svc-rb5zg [689.65993ms] Mar 8 17:56:15.408: INFO: Created: latency-svc-6nr56 Mar 8 17:56:15.429: INFO: Got endpoints: latency-svc-6nr56 [684.362759ms] Mar 8 17:56:15.453: INFO: Created: latency-svc-8t2c7 Mar 8 17:56:15.457: INFO: Got endpoints: latency-svc-8t2c7 [633.661184ms] Mar 8 17:56:15.478: INFO: Created: latency-svc-5wv9n Mar 8 17:56:15.481: INFO: Got endpoints: latency-svc-5wv9n [628.633475ms] Mar 8 17:56:15.539: INFO: Created: latency-svc-x8wv6 Mar 8 17:56:15.547: INFO: Got endpoints: latency-svc-x8wv6 [665.648064ms] Mar 8 17:56:15.585: INFO: Created: latency-svc-x79sq Mar 8 17:56:15.611: INFO: Got endpoints: latency-svc-x79sq [705.631533ms] Mar 8 17:56:15.673: INFO: Created: latency-svc-nl928 Mar 8 17:56:15.691: INFO: Got endpoints: latency-svc-nl928 [685.759649ms] Mar 8 17:56:15.729: INFO: Created: latency-svc-7scsg Mar 8 17:56:15.739: INFO: Got endpoints: latency-svc-7scsg [725.27747ms] Mar 8 17:56:15.762: INFO: Created: latency-svc-7mdkb Mar 8 17:56:15.804: INFO: Got endpoints: latency-svc-7mdkb [739.012887ms] Mar 8 17:56:15.838: INFO: Created: latency-svc-mpvpb Mar 8 17:56:15.847: INFO: Got endpoints: latency-svc-mpvpb [693.425927ms] Mar 8 17:56:15.871: INFO: Created: latency-svc-4hnwh Mar 8 17:56:15.876: INFO: Got endpoints: latency-svc-4hnwh [718.706678ms] Mar 8 17:56:15.900: INFO: Created: latency-svc-b8k8h Mar 8 17:56:15.936: INFO: Got endpoints: latency-svc-b8k8h [754.078291ms] Mar 8 17:56:15.958: INFO: Created: latency-svc-kphw8 Mar 8 17:56:15.961: INFO: Got endpoints: latency-svc-kphw8 [748.725741ms] Mar 8 17:56:15.982: INFO: Created: latency-svc-fnfg9 Mar 8 17:56:15.985: INFO: Got endpoints: latency-svc-fnfg9 [669.887342ms] Mar 8 17:56:16.008: INFO: Created: latency-svc-fqdg7 Mar 8 17:56:16.028: INFO: Got endpoints: latency-svc-fqdg7 [690.172849ms] Mar 8 17:56:16.092: INFO: Created: latency-svc-tgkgz Mar 8 17:56:16.121: INFO: Created: latency-svc-hnzmt Mar 8 17:56:16.121: INFO: Got endpoints: latency-svc-tgkgz [729.030212ms] Mar 8 17:56:16.154: INFO: Got endpoints: latency-svc-hnzmt [724.974004ms] Mar 8 17:56:16.223: INFO: Created: latency-svc-r2scz Mar 8 17:56:16.229: INFO: Got endpoints: latency-svc-r2scz [772.822687ms] Mar 8 17:56:16.272: INFO: Created: latency-svc-fvmgp Mar 8 17:56:16.278: INFO: Got endpoints: latency-svc-fvmgp [796.887858ms] Mar 8 17:56:16.308: INFO: Created: latency-svc-rzbj6 Mar 8 17:56:16.355: INFO: Got endpoints: latency-svc-rzbj6 [807.783692ms] Mar 8 17:56:16.356: INFO: Created: latency-svc-lfk6q Mar 8 17:56:16.385: INFO: Got endpoints: latency-svc-lfk6q [773.948736ms] Mar 8 17:56:16.406: INFO: Created: latency-svc-wfp8g Mar 8 17:56:16.415: INFO: Got endpoints: latency-svc-wfp8g [724.078856ms] Mar 8 17:56:16.442: INFO: Created: latency-svc-47qtz Mar 8 17:56:16.451: INFO: Got endpoints: latency-svc-47qtz [712.744973ms] Mar 8 17:56:16.516: INFO: Created: latency-svc-fmwvh Mar 8 17:56:16.577: INFO: Got endpoints: latency-svc-fmwvh [773.034136ms] Mar 8 17:56:16.609: INFO: Created: latency-svc-mjhdh Mar 8 17:56:16.614: INFO: Got endpoints: latency-svc-mjhdh [766.935891ms] Mar 8 17:56:16.684: INFO: Created: latency-svc-bkwbk Mar 8 17:56:16.730: INFO: Got endpoints: latency-svc-bkwbk [853.583325ms] Mar 8 
17:56:16.731: INFO: Created: latency-svc-hrfgl Mar 8 17:56:16.760: INFO: Got endpoints: latency-svc-hrfgl [824.141657ms] Mar 8 17:56:16.822: INFO: Created: latency-svc-8clkf Mar 8 17:56:16.848: INFO: Created: latency-svc-dqstz Mar 8 17:56:16.849: INFO: Got endpoints: latency-svc-8clkf [888.009567ms] Mar 8 17:56:16.866: INFO: Got endpoints: latency-svc-dqstz [881.043702ms] Mar 8 17:56:16.908: INFO: Created: latency-svc-s4222 Mar 8 17:56:16.948: INFO: Got endpoints: latency-svc-s4222 [919.865115ms] Mar 8 17:56:16.974: INFO: Created: latency-svc-lckn8 Mar 8 17:56:16.985: INFO: Got endpoints: latency-svc-lckn8 [863.446868ms] Mar 8 17:56:17.024: INFO: Created: latency-svc-7rfbh Mar 8 17:56:17.028: INFO: Got endpoints: latency-svc-7rfbh [873.83781ms] Mar 8 17:56:17.091: INFO: Created: latency-svc-lnhlw Mar 8 17:56:17.112: INFO: Created: latency-svc-qcj79 Mar 8 17:56:17.112: INFO: Got endpoints: latency-svc-lnhlw [882.204063ms] Mar 8 17:56:17.123: INFO: Got endpoints: latency-svc-qcj79 [844.91498ms] Mar 8 17:56:17.148: INFO: Created: latency-svc-jb4g9 Mar 8 17:56:17.153: INFO: Got endpoints: latency-svc-jb4g9 [797.479708ms] Mar 8 17:56:17.178: INFO: Created: latency-svc-rwbqk Mar 8 17:56:17.182: INFO: Got endpoints: latency-svc-rwbqk [796.805659ms] Mar 8 17:56:17.240: INFO: Created: latency-svc-zws9c Mar 8 17:56:17.248: INFO: Got endpoints: latency-svc-zws9c [833.203751ms] Mar 8 17:56:17.276: INFO: Created: latency-svc-rmdsr Mar 8 17:56:17.290: INFO: Got endpoints: latency-svc-rmdsr [838.776819ms] Mar 8 17:56:17.361: INFO: Created: latency-svc-vdwhv Mar 8 17:56:17.382: INFO: Got endpoints: latency-svc-vdwhv [804.460281ms] Mar 8 17:56:17.382: INFO: Created: latency-svc-lm7t4 Mar 8 17:56:17.403: INFO: Got endpoints: latency-svc-lm7t4 [788.966192ms] Mar 8 17:56:17.426: INFO: Created: latency-svc-7zlfn Mar 8 17:56:17.442: INFO: Got endpoints: latency-svc-7zlfn [711.561595ms] Mar 8 17:56:17.506: INFO: Created: latency-svc-x7th8 Mar 8 17:56:17.512: INFO: Got endpoints: latency-svc-x7th8 [752.109879ms] Mar 8 17:56:17.533: INFO: Created: latency-svc-r7pr8 Mar 8 17:56:17.536: INFO: Got endpoints: latency-svc-r7pr8 [687.378463ms] Mar 8 17:56:17.583: INFO: Created: latency-svc-pwxt5 Mar 8 17:56:17.596: INFO: Got endpoints: latency-svc-pwxt5 [730.083779ms] Mar 8 17:56:17.657: INFO: Created: latency-svc-g956x Mar 8 17:56:17.673: INFO: Created: latency-svc-6cs69 Mar 8 17:56:17.673: INFO: Got endpoints: latency-svc-g956x [725.361536ms] Mar 8 17:56:17.706: INFO: Created: latency-svc-db7kw Mar 8 17:56:17.708: INFO: Got endpoints: latency-svc-6cs69 [723.472459ms] Mar 8 17:56:17.737: INFO: Got endpoints: latency-svc-db7kw [708.940233ms] Mar 8 17:56:17.780: INFO: Created: latency-svc-jxcvw Mar 8 17:56:17.787: INFO: Got endpoints: latency-svc-jxcvw [675.685474ms] Mar 8 17:56:17.829: INFO: Created: latency-svc-9z65f Mar 8 17:56:17.835: INFO: Got endpoints: latency-svc-9z65f [712.404675ms] Mar 8 17:56:17.859: INFO: Created: latency-svc-v5skk Mar 8 17:56:17.878: INFO: Got endpoints: latency-svc-v5skk [724.92369ms] Mar 8 17:56:17.912: INFO: Created: latency-svc-vknhd Mar 8 17:56:17.931: INFO: Got endpoints: latency-svc-vknhd [749.23481ms] Mar 8 17:56:17.933: INFO: Created: latency-svc-d8lsb Mar 8 17:56:17.938: INFO: Got endpoints: latency-svc-d8lsb [689.493173ms] Mar 8 17:56:17.956: INFO: Created: latency-svc-79ljj Mar 8 17:56:17.989: INFO: Got endpoints: latency-svc-79ljj [699.186754ms] Mar 8 17:56:18.056: INFO: Created: latency-svc-tff8r Mar 8 17:56:18.095: INFO: Created: latency-svc-vmlns Mar 8 17:56:18.095: INFO: Got 
endpoints: latency-svc-tff8r [713.506302ms] Mar 8 17:56:18.146: INFO: Got endpoints: latency-svc-vmlns [743.692774ms] Mar 8 17:56:18.203: INFO: Created: latency-svc-mtlnn Mar 8 17:56:18.207: INFO: Got endpoints: latency-svc-mtlnn [765.592146ms] Mar 8 17:56:18.233: INFO: Created: latency-svc-2n9m2 Mar 8 17:56:18.245: INFO: Got endpoints: latency-svc-2n9m2 [732.456008ms] Mar 8 17:56:18.275: INFO: Created: latency-svc-2l4xf Mar 8 17:56:18.279: INFO: Got endpoints: latency-svc-2l4xf [742.532402ms] Mar 8 17:56:18.325: INFO: Created: latency-svc-dc6zx Mar 8 17:56:18.333: INFO: Got endpoints: latency-svc-dc6zx [736.927502ms] Mar 8 17:56:18.383: INFO: Created: latency-svc-8l47v Mar 8 17:56:18.386: INFO: Got endpoints: latency-svc-8l47v [713.301344ms] Mar 8 17:56:18.463: INFO: Created: latency-svc-2pnpr Mar 8 17:56:18.471: INFO: Got endpoints: latency-svc-2pnpr [762.75871ms] Mar 8 17:56:18.504: INFO: Created: latency-svc-dr6g2 Mar 8 17:56:18.513: INFO: Got endpoints: latency-svc-dr6g2 [775.97415ms] Mar 8 17:56:18.600: INFO: Created: latency-svc-wjpm7 Mar 8 17:56:18.608: INFO: Got endpoints: latency-svc-wjpm7 [820.860867ms] Mar 8 17:56:18.636: INFO: Created: latency-svc-zvd5h Mar 8 17:56:18.664: INFO: Got endpoints: latency-svc-zvd5h [828.621486ms] Mar 8 17:56:18.685: INFO: Created: latency-svc-p4vt6 Mar 8 17:56:18.699: INFO: Got endpoints: latency-svc-p4vt6 [821.285749ms] Mar 8 17:56:18.742: INFO: Created: latency-svc-dwv96 Mar 8 17:56:18.746: INFO: Got endpoints: latency-svc-dwv96 [814.887445ms] Mar 8 17:56:18.765: INFO: Created: latency-svc-rs9sp Mar 8 17:56:18.770: INFO: Got endpoints: latency-svc-rs9sp [832.130473ms] Mar 8 17:56:18.810: INFO: Created: latency-svc-mcm77 Mar 8 17:56:18.824: INFO: Got endpoints: latency-svc-mcm77 [834.662028ms] Mar 8 17:56:18.886: INFO: Created: latency-svc-8l682 Mar 8 17:56:18.891: INFO: Got endpoints: latency-svc-8l682 [795.027968ms] Mar 8 17:56:18.925: INFO: Created: latency-svc-qrdpt Mar 8 17:56:18.932: INFO: Got endpoints: latency-svc-qrdpt [785.986299ms] Mar 8 17:56:18.996: INFO: Created: latency-svc-w5phf Mar 8 17:56:19.025: INFO: Created: latency-svc-h6lv5 Mar 8 17:56:19.025: INFO: Got endpoints: latency-svc-w5phf [817.926842ms] Mar 8 17:56:19.035: INFO: Got endpoints: latency-svc-h6lv5 [790.053231ms] Mar 8 17:56:19.081: INFO: Created: latency-svc-78nj9 Mar 8 17:56:19.139: INFO: Got endpoints: latency-svc-78nj9 [859.923407ms] Mar 8 17:56:19.195: INFO: Created: latency-svc-mg44k Mar 8 17:56:19.201: INFO: Got endpoints: latency-svc-mg44k [868.411605ms] Mar 8 17:56:19.248: INFO: Created: latency-svc-brksd Mar 8 17:56:19.274: INFO: Got endpoints: latency-svc-brksd [887.59039ms] Mar 8 17:56:19.277: INFO: Created: latency-svc-7sj6v Mar 8 17:56:19.294: INFO: Got endpoints: latency-svc-7sj6v [822.902261ms] Mar 8 17:56:19.324: INFO: Created: latency-svc-drg7w Mar 8 17:56:19.334: INFO: Got endpoints: latency-svc-drg7w [821.101001ms] Mar 8 17:56:19.403: INFO: Created: latency-svc-mhvbk Mar 8 17:56:19.422: INFO: Got endpoints: latency-svc-mhvbk [814.013005ms] Mar 8 17:56:19.423: INFO: Created: latency-svc-zp8cn Mar 8 17:56:19.441: INFO: Got endpoints: latency-svc-zp8cn [777.00162ms] Mar 8 17:56:19.459: INFO: Created: latency-svc-kt9cg Mar 8 17:56:19.465: INFO: Got endpoints: latency-svc-kt9cg [766.16098ms] Mar 8 17:56:19.483: INFO: Created: latency-svc-2fx4k Mar 8 17:56:19.502: INFO: Got endpoints: latency-svc-2fx4k [755.202862ms] Mar 8 17:56:19.547: INFO: Created: latency-svc-2nr4s Mar 8 17:56:19.567: INFO: Got endpoints: latency-svc-2nr4s [796.690797ms] Mar 8 
17:56:19.612: INFO: Created: latency-svc-mxnzw Mar 8 17:56:19.621: INFO: Got endpoints: latency-svc-mxnzw [796.968487ms] Mar 8 17:56:19.645: INFO: Created: latency-svc-tzggt Mar 8 17:56:19.678: INFO: Got endpoints: latency-svc-tzggt [787.635255ms] Mar 8 17:56:19.715: INFO: Created: latency-svc-pd4lv Mar 8 17:56:19.723: INFO: Got endpoints: latency-svc-pd4lv [790.47559ms] Mar 8 17:56:19.751: INFO: Created: latency-svc-4srvt Mar 8 17:56:19.759: INFO: Got endpoints: latency-svc-4srvt [733.583558ms] Mar 8 17:56:19.800: INFO: Created: latency-svc-crp2g Mar 8 17:56:19.825: INFO: Got endpoints: latency-svc-crp2g [789.817005ms] Mar 8 17:56:19.826: INFO: Created: latency-svc-b8l89 Mar 8 17:56:19.855: INFO: Got endpoints: latency-svc-b8l89 [716.263671ms] Mar 8 17:56:19.873: INFO: Created: latency-svc-88w88 Mar 8 17:56:19.879: INFO: Got endpoints: latency-svc-88w88 [677.618628ms] Mar 8 17:56:19.919: INFO: Created: latency-svc-kdvn9 Mar 8 17:56:19.963: INFO: Got endpoints: latency-svc-kdvn9 [689.40265ms] Mar 8 17:56:19.963: INFO: Created: latency-svc-b8qvr Mar 8 17:56:19.974: INFO: Got endpoints: latency-svc-b8qvr [680.566171ms] Mar 8 17:56:19.993: INFO: Created: latency-svc-p2qsh Mar 8 17:56:19.999: INFO: Got endpoints: latency-svc-p2qsh [664.968917ms] Mar 8 17:56:20.049: INFO: Created: latency-svc-vnmv4 Mar 8 17:56:20.052: INFO: Got endpoints: latency-svc-vnmv4 [629.637989ms] Mar 8 17:56:20.079: INFO: Created: latency-svc-hdkzq Mar 8 17:56:20.080: INFO: Got endpoints: latency-svc-hdkzq [638.943411ms] Mar 8 17:56:20.105: INFO: Created: latency-svc-stl5x Mar 8 17:56:20.126: INFO: Created: latency-svc-c9hh4 Mar 8 17:56:20.127: INFO: Got endpoints: latency-svc-stl5x [661.386225ms] Mar 8 17:56:20.130: INFO: Got endpoints: latency-svc-c9hh4 [628.73576ms] Mar 8 17:56:20.194: INFO: Created: latency-svc-c2pdt Mar 8 17:56:20.197: INFO: Got endpoints: latency-svc-c2pdt [630.350264ms] Mar 8 17:56:20.228: INFO: Created: latency-svc-mmmmn Mar 8 17:56:20.239: INFO: Got endpoints: latency-svc-mmmmn [617.893919ms] Mar 8 17:56:20.279: INFO: Created: latency-svc-4qd85 Mar 8 17:56:20.292: INFO: Got endpoints: latency-svc-4qd85 [614.101878ms] Mar 8 17:56:20.337: INFO: Created: latency-svc-zzqpk Mar 8 17:56:20.340: INFO: Got endpoints: latency-svc-zzqpk [617.150386ms] Mar 8 17:56:20.372: INFO: Created: latency-svc-z9ng2 Mar 8 17:56:20.376: INFO: Got endpoints: latency-svc-z9ng2 [617.012419ms] Mar 8 17:56:20.395: INFO: Created: latency-svc-kb7n5 Mar 8 17:56:20.401: INFO: Got endpoints: latency-svc-kb7n5 [575.755692ms] Mar 8 17:56:20.430: INFO: Created: latency-svc-vbpwk Mar 8 17:56:20.436: INFO: Got endpoints: latency-svc-vbpwk [580.770059ms] Mar 8 17:56:20.468: INFO: Created: latency-svc-jkltk Mar 8 17:56:20.483: INFO: Got endpoints: latency-svc-jkltk [604.018388ms] Mar 8 17:56:20.503: INFO: Created: latency-svc-hr7l9 Mar 8 17:56:20.507: INFO: Got endpoints: latency-svc-hr7l9 [544.027528ms] Mar 8 17:56:20.527: INFO: Created: latency-svc-64p6w Mar 8 17:56:20.532: INFO: Got endpoints: latency-svc-64p6w [557.262095ms] Mar 8 17:56:20.552: INFO: Created: latency-svc-skt5r Mar 8 17:56:20.555: INFO: Got endpoints: latency-svc-skt5r [556.619779ms] Mar 8 17:56:20.609: INFO: Created: latency-svc-hkvz6 Mar 8 17:56:20.616: INFO: Got endpoints: latency-svc-hkvz6 [564.311569ms] Mar 8 17:56:20.635: INFO: Created: latency-svc-h8pfc Mar 8 17:56:20.646: INFO: Got endpoints: latency-svc-h8pfc [565.772463ms] Mar 8 17:56:20.750: INFO: Created: latency-svc-bhb95 Mar 8 17:56:20.790: INFO: Got endpoints: latency-svc-bhb95 [663.209235ms] Mar 8 
17:56:20.790: INFO: Created: latency-svc-k5kfj Mar 8 17:56:20.808: INFO: Got endpoints: latency-svc-k5kfj [676.999872ms] Mar 8 17:56:20.876: INFO: Created: latency-svc-xscxj Mar 8 17:56:20.916: INFO: Created: latency-svc-m57sm Mar 8 17:56:20.916: INFO: Got endpoints: latency-svc-xscxj [718.667827ms] Mar 8 17:56:20.939: INFO: Got endpoints: latency-svc-m57sm [699.837929ms] Mar 8 17:56:21.002: INFO: Created: latency-svc-rm4p5 Mar 8 17:56:21.011: INFO: Got endpoints: latency-svc-rm4p5 [718.396194ms] Mar 8 17:56:21.061: INFO: Created: latency-svc-s2vkh Mar 8 17:56:21.071: INFO: Got endpoints: latency-svc-s2vkh [731.042832ms] Mar 8 17:56:21.140: INFO: Created: latency-svc-8rwch Mar 8 17:56:21.163: INFO: Got endpoints: latency-svc-8rwch [787.178176ms] Mar 8 17:56:21.165: INFO: Created: latency-svc-5wdd4 Mar 8 17:56:21.167: INFO: Got endpoints: latency-svc-5wdd4 [765.8977ms] Mar 8 17:56:21.213: INFO: Created: latency-svc-9trbq Mar 8 17:56:21.221: INFO: Got endpoints: latency-svc-9trbq [785.08682ms] Mar 8 17:56:21.277: INFO: Created: latency-svc-htv6n Mar 8 17:56:21.303: INFO: Created: latency-svc-pjhd6 Mar 8 17:56:21.304: INFO: Got endpoints: latency-svc-htv6n [820.955874ms] Mar 8 17:56:21.316: INFO: Got endpoints: latency-svc-pjhd6 [808.888031ms] Mar 8 17:56:21.397: INFO: Created: latency-svc-rtzfv Mar 8 17:56:21.441: INFO: Got endpoints: latency-svc-rtzfv [909.445977ms] Mar 8 17:56:21.442: INFO: Created: latency-svc-bz6gc Mar 8 17:56:21.460: INFO: Got endpoints: latency-svc-bz6gc [904.622409ms] Mar 8 17:56:21.535: INFO: Created: latency-svc-56pqq Mar 8 17:56:21.553: INFO: Got endpoints: latency-svc-56pqq [936.64531ms] Mar 8 17:56:21.554: INFO: Created: latency-svc-fckpt Mar 8 17:56:21.568: INFO: Got endpoints: latency-svc-fckpt [922.432219ms] Mar 8 17:56:21.631: INFO: Created: latency-svc-bx7w9 Mar 8 17:56:21.679: INFO: Got endpoints: latency-svc-bx7w9 [889.282478ms] Mar 8 17:56:21.708: INFO: Created: latency-svc-7zvk7 Mar 8 17:56:21.718: INFO: Got endpoints: latency-svc-7zvk7 [910.278683ms] Mar 8 17:56:21.746: INFO: Created: latency-svc-jr72b Mar 8 17:56:21.754: INFO: Got endpoints: latency-svc-jr72b [838.03066ms] Mar 8 17:56:21.786: INFO: Created: latency-svc-dkkcp Mar 8 17:56:21.817: INFO: Got endpoints: latency-svc-dkkcp [878.384759ms] Mar 8 17:56:21.818: INFO: Created: latency-svc-gss62 Mar 8 17:56:21.847: INFO: Got endpoints: latency-svc-gss62 [836.676907ms] Mar 8 17:56:21.906: INFO: Created: latency-svc-zdnbv Mar 8 17:56:21.934: INFO: Got endpoints: latency-svc-zdnbv [862.897594ms] Mar 8 17:56:21.934: INFO: Created: latency-svc-fqqrq Mar 8 17:56:21.945: INFO: Got endpoints: latency-svc-fqqrq [782.271047ms] Mar 8 17:56:21.970: INFO: Created: latency-svc-gs7dw Mar 8 17:56:21.982: INFO: Got endpoints: latency-svc-gs7dw [815.375943ms] Mar 8 17:56:22.032: INFO: Created: latency-svc-nrgp9 Mar 8 17:56:22.058: INFO: Got endpoints: latency-svc-nrgp9 [836.761656ms] Mar 8 17:56:22.059: INFO: Created: latency-svc-fzrhj Mar 8 17:56:22.065: INFO: Got endpoints: latency-svc-fzrhj [761.005014ms] Mar 8 17:56:22.126: INFO: Created: latency-svc-8ftvl Mar 8 17:56:22.171: INFO: Got endpoints: latency-svc-8ftvl [854.635727ms] Mar 8 17:56:22.193: INFO: Created: latency-svc-pzw64 Mar 8 17:56:22.197: INFO: Got endpoints: latency-svc-pzw64 [755.912063ms] Mar 8 17:56:22.238: INFO: Created: latency-svc-2wr8t Mar 8 17:56:22.257: INFO: Got endpoints: latency-svc-2wr8t [797.211318ms] Mar 8 17:56:22.334: INFO: Created: latency-svc-kqfwj Mar 8 17:56:22.341: INFO: Got endpoints: latency-svc-kqfwj [787.981013ms] Mar 8 
17:56:22.372: INFO: Created: latency-svc-g6l4h Mar 8 17:56:22.377: INFO: Got endpoints: latency-svc-g6l4h [809.041947ms] Mar 8 17:56:22.403: INFO: Created: latency-svc-p2dq5 Mar 8 17:56:22.413: INFO: Got endpoints: latency-svc-p2dq5 [733.591695ms] Mar 8 17:56:22.445: INFO: Created: latency-svc-ltdqv Mar 8 17:56:22.473: INFO: Got endpoints: latency-svc-ltdqv [754.977267ms] Mar 8 17:56:22.473: INFO: Created: latency-svc-cxz6p Mar 8 17:56:22.503: INFO: Got endpoints: latency-svc-cxz6p [749.107753ms] Mar 8 17:56:22.535: INFO: Created: latency-svc-2ddjb Mar 8 17:56:22.538: INFO: Got endpoints: latency-svc-2ddjb [720.997507ms] Mar 8 17:56:22.596: INFO: Created: latency-svc-8ct5x Mar 8 17:56:22.605: INFO: Got endpoints: latency-svc-8ct5x [757.337016ms] Mar 8 17:56:22.638: INFO: Created: latency-svc-5ptqw Mar 8 17:56:22.641: INFO: Got endpoints: latency-svc-5ptqw [706.49364ms] Mar 8 17:56:22.668: INFO: Created: latency-svc-rk8d6 Mar 8 17:56:22.670: INFO: Got endpoints: latency-svc-rk8d6 [724.941088ms] Mar 8 17:56:22.738: INFO: Created: latency-svc-xqpmg Mar 8 17:56:22.770: INFO: Created: latency-svc-jz5sf Mar 8 17:56:22.770: INFO: Got endpoints: latency-svc-xqpmg [787.955039ms] Mar 8 17:56:22.778: INFO: Got endpoints: latency-svc-jz5sf [720.387559ms] Mar 8 17:56:22.824: INFO: Created: latency-svc-4rsxq Mar 8 17:56:22.833: INFO: Got endpoints: latency-svc-4rsxq [768.052125ms] Mar 8 17:56:22.884: INFO: Created: latency-svc-2nx8m Mar 8 17:56:22.904: INFO: Got endpoints: latency-svc-2nx8m [733.304105ms] Mar 8 17:56:22.924: INFO: Created: latency-svc-mmrqq Mar 8 17:56:22.933: INFO: Got endpoints: latency-svc-mmrqq [735.355803ms] Mar 8 17:56:22.942: INFO: Created: latency-svc-p4cvh Mar 8 17:56:22.946: INFO: Got endpoints: latency-svc-p4cvh [688.532482ms] Mar 8 17:56:22.972: INFO: Created: latency-svc-blkpl Mar 8 17:56:23.007: INFO: Got endpoints: latency-svc-blkpl [666.281542ms] Mar 8 17:56:23.044: INFO: Created: latency-svc-t5mrs Mar 8 17:56:23.055: INFO: Got endpoints: latency-svc-t5mrs [677.475744ms] Mar 8 17:56:23.080: INFO: Created: latency-svc-q4wh6 Mar 8 17:56:23.084: INFO: Got endpoints: latency-svc-q4wh6 [671.418791ms] Mar 8 17:56:23.133: INFO: Created: latency-svc-2x4mr Mar 8 17:56:23.154: INFO: Created: latency-svc-fljnx Mar 8 17:56:23.155: INFO: Got endpoints: latency-svc-2x4mr [681.864905ms] Mar 8 17:56:23.162: INFO: Got endpoints: latency-svc-fljnx [658.673756ms] Mar 8 17:56:23.184: INFO: Created: latency-svc-mfcwt Mar 8 17:56:23.187: INFO: Got endpoints: latency-svc-mfcwt [648.727721ms] Mar 8 17:56:23.209: INFO: Created: latency-svc-qsmmv Mar 8 17:56:23.221: INFO: Got endpoints: latency-svc-qsmmv [616.276162ms] Mar 8 17:56:23.253: INFO: Created: latency-svc-ld7th Mar 8 17:56:23.273: INFO: Got endpoints: latency-svc-ld7th [632.096454ms] Mar 8 17:56:23.273: INFO: Created: latency-svc-mxb47 Mar 8 17:56:23.276: INFO: Got endpoints: latency-svc-mxb47 [605.437133ms] Mar 8 17:56:23.302: INFO: Created: latency-svc-w9vrm Mar 8 17:56:23.316: INFO: Got endpoints: latency-svc-w9vrm [546.118406ms] Mar 8 17:56:23.335: INFO: Created: latency-svc-8x26s Mar 8 17:56:23.341: INFO: Got endpoints: latency-svc-8x26s [563.14372ms] Mar 8 17:56:23.341: INFO: Latencies: [79.164517ms 112.411186ms 117.85226ms 141.902772ms 196.255372ms 225.640768ms 292.59967ms 331.464095ms 359.219849ms 417.260677ms 422.857653ms 455.099021ms 460.027929ms 476.896688ms 481.154608ms 495.524174ms 497.140634ms 503.096749ms 508.650692ms 512.540611ms 513.080547ms 515.822389ms 521.113634ms 536.19519ms 539.038485ms 539.660821ms 542.146415ms 
544.027528ms 546.118406ms 556.619779ms 557.262095ms 557.80217ms 563.14372ms 564.311569ms 565.772463ms 575.755692ms 580.770059ms 581.390992ms 584.703987ms 592.795904ms 602.656973ms 604.018388ms 605.437133ms 614.101878ms 616.276162ms 617.012419ms 617.150386ms 617.478258ms 617.893919ms 628.633475ms 628.73576ms 629.511246ms 629.637989ms 630.350264ms 632.096454ms 633.661184ms 638.943411ms 648.727721ms 654.618617ms 658.673756ms 661.386225ms 663.209235ms 664.968917ms 665.648064ms 666.281542ms 669.887342ms 671.418791ms 675.685474ms 676.999872ms 677.475744ms 677.618628ms 680.566171ms 681.864905ms 684.362759ms 685.759649ms 687.378463ms 688.532482ms 689.40265ms 689.493173ms 689.65993ms 690.172849ms 693.425927ms 695.144032ms 696.362554ms 699.186754ms 699.837929ms 705.631533ms 706.49364ms 708.940233ms 711.561595ms 712.404675ms 712.744973ms 713.301344ms 713.506302ms 716.263671ms 718.396194ms 718.667827ms 718.706678ms 720.387559ms 720.997507ms 723.472459ms 724.078856ms 724.92369ms 724.941088ms 724.974004ms 725.27747ms 725.361536ms 729.030212ms 730.083779ms 731.042832ms 732.456008ms 733.304105ms 733.583558ms 733.591695ms 735.355803ms 736.927502ms 739.012887ms 742.532402ms 743.692774ms 748.725741ms 749.107753ms 749.23481ms 752.109879ms 754.078291ms 754.977267ms 755.202862ms 755.912063ms 757.337016ms 761.005014ms 762.75871ms 765.592146ms 765.8977ms 766.16098ms 766.935891ms 768.052125ms 772.822687ms 773.034136ms 773.948736ms 775.97415ms 777.00162ms 782.271047ms 785.08682ms 785.986299ms 787.178176ms 787.635255ms 787.955039ms 787.981013ms 788.966192ms 789.817005ms 790.053231ms 790.47559ms 795.027968ms 796.690797ms 796.805659ms 796.887858ms 796.968487ms 797.211318ms 797.479708ms 804.460281ms 807.783692ms 808.888031ms 809.041947ms 814.013005ms 814.887445ms 815.375943ms 817.926842ms 820.860867ms 820.955874ms 821.101001ms 821.285749ms 822.902261ms 824.141657ms 828.621486ms 832.130473ms 833.203751ms 834.662028ms 836.676907ms 836.761656ms 838.03066ms 838.776819ms 844.91498ms 853.583325ms 854.635727ms 859.923407ms 862.897594ms 863.446868ms 868.411605ms 873.83781ms 878.384759ms 881.043702ms 882.204063ms 887.59039ms 888.009567ms 889.282478ms 904.622409ms 909.445977ms 910.278683ms 919.865115ms 922.432219ms 936.64531ms] Mar 8 17:56:23.342: INFO: 50 %ile: 723.472459ms Mar 8 17:56:23.342: INFO: 90 %ile: 844.91498ms Mar 8 17:56:23.342: INFO: 99 %ile: 922.432219ms Mar 8 17:56:23.342: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:23.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4533" for this suite. 
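------------------------------
The latency test above collects one endpoint-propagation sample per service (200 in total), sorts them, and reports the 50th/90th/99th percentiles, which the suite then checks against fixed ceilings. The percentile step is just an index into the sorted slice; a nearest-rank sketch (the suite's exact rounding may differ):

package sketch

import (
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of ds by
// sorting a copy and indexing into it, nearest-rank style.
func percentile(ds []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), ds...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := p * len(sorted) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}
------------------------------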
• [SLOW TEST:11.690 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":230,"skipped":4043,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:23.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 17:56:23.455: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 8 17:56:28.468: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 17:56:28.469: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 8 17:56:30.654: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5091 /apis/apps/v1/namespaces/deployment-5091/deployments/test-cleanup-deployment 4407d6df-5551-4e2f-bacb-c183d19c1987 63716 1 2020-03-08 17:56:28 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029191e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 17:56:28 +0000 UTC,LastTransitionTime:2020-03-08 17:56:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-03-08 17:56:30 +0000 UTC,LastTransitionTime:2020-03-08 17:56:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 17:56:30.684: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-5091 /apis/apps/v1/namespaces/deployment-5091/replicasets/test-cleanup-deployment-577c77b589 c6963e46-53ca-44e0-8ea1-9d5b47457831 63702 1 2020-03-08 17:56:28 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 4407d6df-5551-4e2f-bacb-c183d19c1987 0xc00275b4a7 0xc00275b4a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00275b518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 17:56:30.695: INFO: Pod "test-cleanup-deployment-577c77b589-hhjvk" is available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-hhjvk test-cleanup-deployment-577c77b589- deployment-5091 /api/v1/namespaces/deployment-5091/pods/test-cleanup-deployment-577c77b589-hhjvk ee5e3483-a05d-4785-904f-411df1ea6a81 63701 0 2020-03-08 17:56:28 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 c6963e46-53ca-44e0-8ea1-9d5b47457831 0xc00275b907 0xc00275b908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t8pbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t8pbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t8pbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:56:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:56:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:56:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 17:56:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.114,StartTime:2020-03-08 17:56:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 17:56:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://1fc6d31061e8abf1125019728693cb55d1871abc2ba1a856c737cf4171b0b077,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:30.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5091" for this suite. • [SLOW TEST:7.356 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":231,"skipped":4098,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:30.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-1066/secret-test-de6e96e3-2089-486e-a493-6f18aa2998b8 STEP: Creating a pod to test consume secrets Mar 8 17:56:30.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365" in namespace "secrets-1066" to be "Succeeded or Failed" Mar 8 17:56:31.068: INFO: Pod "pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365": Phase="Pending", Reason="", readiness=false. Elapsed: 90.976247ms Mar 8 17:56:33.079: INFO: Pod "pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102284421s Mar 8 17:56:35.097: INFO: Pod "pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.120642939s STEP: Saw pod success Mar 8 17:56:35.097: INFO: Pod "pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365" satisfied condition "Succeeded or Failed" Mar 8 17:56:35.130: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365 container env-test: STEP: delete the pod Mar 8 17:56:35.238: INFO: Waiting for pod pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365 to disappear Mar 8 17:56:35.245: INFO: Pod pod-configmaps-8712aa98-3a05-4dde-8398-dbe47af74365 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:35.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1066" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4106,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:35.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 17:56:38.626: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:38.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-359" for this suite. 
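------------------------------
Aside, for readers reproducing the termination-message check above: the container shape it exercises can be sketched with the k8s.io/api Go types. The image, command, and message value here are illustrative assumptions, not values taken from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch: a container that writes its termination message to a file.
	// With FallbackToLogsOnError, the kubelet falls back to the tail of the
	// container log if the message file is empty when the container fails.
	c := corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox", // illustrative
		Command:                  []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Printf("%+v\n", c)
}
------------------------------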
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:38.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 8 17:56:39.512: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:39.532: INFO: Number of nodes with available pods: 0 Mar 8 17:56:39.532: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:40.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:40.610: INFO: Number of nodes with available pods: 0 Mar 8 17:56:40.611: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:41.551: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:41.558: INFO: Number of nodes with available pods: 2 Mar 8 17:56:41.558: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 8 17:56:41.731: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:41.737: INFO: Number of nodes with available pods: 1 Mar 8 17:56:41.737: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:42.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:42.755: INFO: Number of nodes with available pods: 1 Mar 8 17:56:42.755: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:44.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:44.018: INFO: Number of nodes with available pods: 1 Mar 8 17:56:44.018: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:44.757: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:44.759: INFO: Number of nodes with available pods: 1 Mar 8 17:56:44.759: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:45.741: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:45.745: INFO: Number of nodes with available pods: 1 Mar 8 17:56:45.745: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:46.741: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:46.744: INFO: Number of nodes with available pods: 1 Mar 8 17:56:46.744: INFO: Node latest-worker is running more than one daemon pod Mar 8 17:56:47.741: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 17:56:47.744: INFO: Number of nodes with available pods: 2 Mar 8 17:56:47.744: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2132, will wait for the garbage collector to delete the pods Mar 8 17:56:47.807: INFO: Deleting DaemonSet.extensions daemon-set took: 7.749567ms Mar 8 17:56:47.908: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.648044ms Mar 8 17:56:52.111: INFO: Number of nodes with available pods: 0 Mar 8 17:56:52.111: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 17:56:52.113: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2132/daemonsets","resourceVersion":"64482"},"items":null} Mar 8 17:56:52.115: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2132/pods","resourceVersion":"64482"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:52.124: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2132" for this suite. • [SLOW TEST:13.385 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":234,"skipped":4170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:52.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:56:53.015: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:56:55.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287013, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287013, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287013, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287012, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:56:58.049: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:56:58.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4552" for this suite. 
STEP: Destroying namespace "webhook-4552-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.226 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":235,"skipped":4208,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:56:58.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:56:58.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32" in namespace "projected-8187" to be "Succeeded or Failed" Mar 8 17:56:58.430: INFO: Pod "downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32": Phase="Pending", Reason="", readiness=false. Elapsed: 33.003625ms Mar 8 17:57:00.434: INFO: Pod "downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036894335s STEP: Saw pod success Mar 8 17:57:00.434: INFO: Pod "downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32" satisfied condition "Succeeded or Failed" Mar 8 17:57:00.437: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32 container client-container: STEP: delete the pod Mar 8 17:57:00.483: INFO: Waiting for pod downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32 to disappear Mar 8 17:57:00.486: INFO: Pod downwardapi-volume-7c5ea904-f9d9-4726-a98f-364a6a336b32 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:00.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8187" for this suite. 
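------------------------------
Aside: the DefaultMode behavior verified above applies one file mode to every file in a projected volume. A minimal Go sketch of such a volume follows; the mode value 0400 and the file path are illustrative assumptions, not read from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // illustrative; applied to every projected file
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname", // file created inside the volume
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------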
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4219,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:00.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-3b9be984-d2b4-4ed3-936c-386319b6a49d STEP: Creating a pod to test consume configMaps Mar 8 17:57:00.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb" in namespace "projected-3705" to be "Succeeded or Failed" Mar 8 17:57:00.628: INFO: Pod "pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.915594ms Mar 8 17:57:02.631: INFO: Pod "pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025438354s STEP: Saw pod success Mar 8 17:57:02.631: INFO: Pod "pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb" satisfied condition "Succeeded or Failed" Mar 8 17:57:02.635: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb container projected-configmap-volume-test: STEP: delete the pod Mar 8 17:57:02.662: INFO: Waiting for pod pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb to disappear Mar 8 17:57:02.685: INFO: Pod pod-projected-configmaps-a128a8c8-8b3b-45f0-923c-9cbc9a96cbdb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:02.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3705" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4237,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:02.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 8 17:57:02.740: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:02.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7576" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":238,"skipped":4259,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:02.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 17:57:06.940: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 17:57:06.961: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 17:57:08.961: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 17:57:08.965: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 17:57:10.961: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 17:57:10.966: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:10.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6629" for this suite. • [SLOW TEST:8.151 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4265,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:10.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-0d4d843f-79bc-41b0-bb62-e33fdd74b19f STEP: Creating a pod to test consume secrets Mar 8 17:57:11.040: INFO: Waiting up to 5m0s for pod "pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109" in namespace "secrets-3548" to be "Succeeded or Failed" Mar 8 17:57:11.075: INFO: Pod "pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109": Phase="Pending", Reason="", readiness=false. Elapsed: 35.063663ms Mar 8 17:57:13.078: INFO: Pod "pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038510626s Mar 8 17:57:15.082: INFO: Pod "pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042379731s STEP: Saw pod success Mar 8 17:57:15.082: INFO: Pod "pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109" satisfied condition "Succeeded or Failed" Mar 8 17:57:15.085: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109 container secret-volume-test: STEP: delete the pod Mar 8 17:57:15.111: INFO: Waiting for pod pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109 to disappear Mar 8 17:57:15.115: INFO: Pod pod-secrets-745aeeac-4a0d-467c-9957-6252ce0c0109 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:15.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3548" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:15.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:15.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6402" for this suite. 
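------------------------------
Aside: the QOS-class check above hinges on requests matching limits. When every container in a pod sets requests equal to limits for both cpu and memory, the kubelet reports status.qosClass as Guaranteed (contrast the BestEffort pod dumped earlier in this log, which set no resources at all). A sketch with illustrative amounts:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Identical Requests and Limits for every resource => Guaranteed QoS.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),  // illustrative
		corev1.ResourceMemory: resource.MustParse("100Mi"), // illustrative
	}
	c := corev1.Container{
		Name:  "agnhost",
		Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
		Resources: corev1.ResourceRequirements{
			Requests: res,
			Limits:   res,
		},
	}
	fmt.Printf("%+v\n", c)
}
------------------------------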
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":241,"skipped":4322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:15.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 8 17:57:17.863: INFO: Successfully updated pod "annotationupdate19238c58-eda9-4ed4-96fa-6ae8c05c4978" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:19.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6544" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4349,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:19.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9328.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9328.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:57:24.018: INFO: DNS probes using dns-test-a7b55b05-66fa-482a-9359-655ee2f64e9f succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9328.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local; sleep 1; done STEP: Running these 
commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9328.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:57:28.157: INFO: File wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:28.160: INFO: File jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:28.160: INFO: Lookups using dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 failed for: [wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local] Mar 8 17:57:33.171: INFO: File wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:33.175: INFO: File jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:33.175: INFO: Lookups using dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 failed for: [wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local] Mar 8 17:57:38.163: INFO: File wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:38.165: INFO: File jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:38.165: INFO: Lookups using dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 failed for: [wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local] Mar 8 17:57:43.164: INFO: File wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:43.169: INFO: File jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:43.169: INFO: Lookups using dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 failed for: [wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local] Mar 8 17:57:48.165: INFO: File wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 17:57:48.168: INFO: File jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local from pod dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 8 17:57:48.168: INFO: Lookups using dns-9328/dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 failed for: [wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local] Mar 8 17:57:53.169: INFO: DNS probes using dns-test-1c3ff4f4-735a-4c36-9b99-b1bd8fc2e032 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9328.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9328.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9328.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9328.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 17:57:57.347: INFO: DNS probes using dns-test-d78adc65-c872-4cf4-a5c9-06b71e433869 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:57:57.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9328" for this suite. • [SLOW TEST:37.519 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":243,"skipped":4353,"failed":0} S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:57:57.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:00.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1658" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":244,"skipped":4354,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:00.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:17.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1580" for this suite. • [SLOW TEST:17.236 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":245,"skipped":4363,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:17.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 8 17:58:17.921: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Mar 8 17:58:18.430: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 8 17:58:23.460: INFO: Waited 2.779004826s for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:23.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6307" for this suite. • [SLOW TEST:6.153 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":246,"skipped":4369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:24.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Mar 8 17:58:24.053: INFO: Waiting up to 5m0s for pod "var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f" in namespace "var-expansion-2393" to be "Succeeded or Failed" Mar 8 17:58:24.064: INFO: Pod "var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.06815ms Mar 8 17:58:26.067: INFO: Pod "var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014203068s STEP: Saw pod success Mar 8 17:58:26.067: INFO: Pod "var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f" satisfied condition "Succeeded or Failed" Mar 8 17:58:26.069: INFO: Trying to get logs from node latest-worker pod var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f container dapi-container: STEP: delete the pod Mar 8 17:58:26.090: INFO: Waiting for pod var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f to disappear Mar 8 17:58:26.098: INFO: Pod var-expansion-7395c062-07bb-403f-a8fa-6e01d605326f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:26.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2393" for this suite. 
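------------------------------
Aside: "composing env vars into new env vars" refers to $(VAR) references in env values, which the kubelet expands from variables defined earlier in the same list. A minimal sketch; the variable names and values are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "bar-value"},
		// $(FOO) and $(BAR) are expanded at container start,
		// yielding "foo-value;;bar-value".
		{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
	}
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox", // illustrative
		Env:   env,
	}
	fmt.Printf("%+v\n", c)
}
------------------------------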
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4395,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:26.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 17:58:26.432: INFO: Waiting up to 5m0s for pod "pod-4980b42f-d112-4e97-b108-326cb683ed4f" in namespace "emptydir-7817" to be "Succeeded or Failed" Mar 8 17:58:26.490: INFO: Pod "pod-4980b42f-d112-4e97-b108-326cb683ed4f": Phase="Pending", Reason="", readiness=false. Elapsed: 57.642704ms Mar 8 17:58:28.493: INFO: Pod "pod-4980b42f-d112-4e97-b108-326cb683ed4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061273526s STEP: Saw pod success Mar 8 17:58:28.493: INFO: Pod "pod-4980b42f-d112-4e97-b108-326cb683ed4f" satisfied condition "Succeeded or Failed" Mar 8 17:58:28.496: INFO: Trying to get logs from node latest-worker pod pod-4980b42f-d112-4e97-b108-326cb683ed4f container test-container: STEP: delete the pod Mar 8 17:58:28.544: INFO: Waiting for pod pod-4980b42f-d112-4e97-b108-326cb683ed4f to disappear Mar 8 17:58:28.549: INFO: Pod pod-4980b42f-d112-4e97-b108-326cb683ed4f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:28.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7817" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4411,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:28.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 17:58:29.363: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 17:58:31.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287109, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287109, loc:(*time.Location)(0x7fda4c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287109, loc:(*time.Location)(0x7fda4c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719287109, loc:(*time.Location)(0x7fda4c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 17:58:34.387: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:34.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1160" for this suite. STEP: Destroying namespace "webhook-1160-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.043 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":249,"skipped":4413,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:34.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:58:34.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a" in namespace "downward-api-3908" to be "Succeeded or Failed" Mar 8 17:58:34.662: INFO: Pod "downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.111289ms Mar 8 17:58:36.667: INFO: Pod "downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026515701s STEP: Saw pod success Mar 8 17:58:36.667: INFO: Pod "downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a" satisfied condition "Succeeded or Failed" Mar 8 17:58:36.670: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a container client-container: STEP: delete the pod Mar 8 17:58:36.690: INFO: Waiting for pod downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a to disappear Mar 8 17:58:36.693: INFO: Pod downwardapi-volume-23a3ae3b-3bc4-44ae-b83c-e0097c66286a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:36.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3908" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4432,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:36.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 8 17:58:36.776: INFO: Waiting up to 5m0s for pod "downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf" in namespace "downward-api-3739" to be "Succeeded or Failed" Mar 8 17:58:36.779: INFO: Pod "downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269267ms Mar 8 17:58:38.784: INFO: Pod "downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007501638s STEP: Saw pod success Mar 8 17:58:38.784: INFO: Pod "downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf" satisfied condition "Succeeded or Failed" Mar 8 17:58:38.788: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf container client-container: STEP: delete the pod Mar 8 17:58:38.810: INFO: Waiting for pod downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf to disappear Mar 8 17:58:38.813: INFO: Pod downwardapi-volume-164977a8-4a90-4eb4-bb9b-358d7ef0bcdf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 17:58:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3739" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4441,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 17:58:38.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 8 17:58:38.875: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 17:58:38.908: INFO: Waiting for terminating namespaces to be deleted... Mar 8 17:58:38.911: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 8 17:58:38.916: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 17:58:38.916: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:58:38.916: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 8 17:58:38.916: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:58:38.916: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 8 17:58:38.922: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 17:58:38.922: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 17:58:38.922: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 8 17:58:38.922: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 17:58:38.922: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 8 17:58:38.922: INFO: Container coredns ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5d022d79-4727-4e6c-9c51-9e1276d335e3 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-5d022d79-4727-4e6c-9c51-9e1276d335e3 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5d022d79-4727-4e6c-9c51-9e1276d335e3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:03:47.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2399" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.364 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":252,"skipped":4448,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:03:47.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 8 18:03:47.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3292' Mar 8 18:03:47.579: INFO: stderr: "" Mar 8 18:03:47.579: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up.
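The ReplicationController just created from stdin looks essentially like this (the image and the name=update-demo selector appear in the log; the replica count of 2 matches the two pods polled below, the rest is illustrative):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: update-demo-nautilus
    spec:
      replicas: 2
      selector:
        name: update-demo
      template:
        metadata:
          labels:
            name: update-demo
        spec:
          containers:
          - name: update-demo
            image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0

kubectl delete --grace-period=0 --force -f rc.yaml reproduces the forced teardown at the end of the spec, including the "Immediate deletion does not wait..." warning.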
Mar 8 18:03:47.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3292' Mar 8 18:03:47.678: INFO: stderr: "" Mar 8 18:03:47.678: INFO: stdout: "update-demo-nautilus-wj5xk update-demo-nautilus-wzmss " Mar 8 18:03:47.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wj5xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3292' Mar 8 18:03:47.767: INFO: stderr: "" Mar 8 18:03:47.767: INFO: stdout: "" Mar 8 18:03:47.767: INFO: update-demo-nautilus-wj5xk is created but not running Mar 8 18:03:52.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3292' Mar 8 18:03:52.874: INFO: stderr: "" Mar 8 18:03:52.874: INFO: stdout: "update-demo-nautilus-wj5xk update-demo-nautilus-wzmss " Mar 8 18:03:52.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wj5xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3292' Mar 8 18:03:52.952: INFO: stderr: "" Mar 8 18:03:52.952: INFO: stdout: "true" Mar 8 18:03:52.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wj5xk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3292' Mar 8 18:03:53.022: INFO: stderr: "" Mar 8 18:03:53.022: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 18:03:53.022: INFO: validating pod update-demo-nautilus-wj5xk Mar 8 18:03:53.025: INFO: got data: { "image": "nautilus.jpg" } Mar 8 18:03:53.025: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 18:03:53.025: INFO: update-demo-nautilus-wj5xk is verified up and running Mar 8 18:03:53.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzmss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3292' Mar 8 18:03:53.094: INFO: stderr: "" Mar 8 18:03:53.094: INFO: stdout: "true" Mar 8 18:03:53.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzmss -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3292' Mar 8 18:03:53.172: INFO: stderr: "" Mar 8 18:03:53.173: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 18:03:53.173: INFO: validating pod update-demo-nautilus-wzmss Mar 8 18:03:53.175: INFO: got data: { "image": "nautilus.jpg" } Mar 8 18:03:53.175: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 18:03:53.175: INFO: update-demo-nautilus-wzmss is verified up and running STEP: using delete to clean up resources Mar 8 18:03:53.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3292' Mar 8 18:03:53.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 18:03:53.241: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 18:03:53.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3292' Mar 8 18:03:53.303: INFO: stderr: "No resources found in kubectl-3292 namespace.\n" Mar 8 18:03:53.303: INFO: stdout: "" Mar 8 18:03:53.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3292 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 18:03:53.367: INFO: stderr: "" Mar 8 18:03:53.367: INFO: stdout: "update-demo-nautilus-wj5xk\nupdate-demo-nautilus-wzmss\n" Mar 8 18:03:53.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3292' Mar 8 18:03:53.946: INFO: stderr: "No resources found in kubectl-3292 namespace.\n" Mar 8 18:03:53.946: INFO: stdout: "" Mar 8 18:03:53.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3292 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 18:03:54.023: INFO: stderr: "" Mar 8 18:03:54.023: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:03:54.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3292" for this suite. 
• [SLOW TEST:6.841 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":253,"skipped":4450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:03:54.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8512 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-8512 Mar 8 18:03:54.109: INFO: Found 0 stateful pods, waiting for 1 Mar 8 18:04:04.113: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 18:04:04.128: INFO: Deleting all statefulset in ns statefulset-8512 Mar 8 18:04:04.134: INFO: Scaling statefulset ss to 0 Mar 8 18:04:24.187: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 18:04:24.189: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:04:24.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8512" for this suite. 
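The scale-subresource read and update in the spec above are what kubectl scale drives under the hood; a sketch using the namespace and StatefulSet name from the log:

    # Read the current Scale object (the /scale subresource of the StatefulSet)
    kubectl get --raw /apis/apps/v1/namespaces/statefulset-8512/statefulsets/ss/scale
    # Update replicas through the same subresource
    kubectl scale statefulset ss --replicas=2 --namespace=statefulset-8512

Only spec.replicas moves through /scale; the rest of the StatefulSet spec is untouched, which is why the spec verifies Spec.Replicas alone.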
• [SLOW TEST:30.181 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":254,"skipped":4479,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:04:24.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 18:04:24.276: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 8 18:04:24.314: INFO: Number of nodes with available pods: 0 Mar 8 18:04:24.314: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 8 18:04:24.388: INFO: Number of nodes with available pods: 0 Mar 8 18:04:24.388: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:25.392: INFO: Number of nodes with available pods: 0 Mar 8 18:04:25.392: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:26.392: INFO: Number of nodes with available pods: 1 Mar 8 18:04:26.392: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 8 18:04:26.424: INFO: Number of nodes with available pods: 1 Mar 8 18:04:26.424: INFO: Number of running nodes: 0, number of available pods: 1 Mar 8 18:04:27.428: INFO: Number of nodes with available pods: 0 Mar 8 18:04:27.428: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 8 18:04:27.440: INFO: Number of nodes with available pods: 0 Mar 8 18:04:27.440: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:28.445: INFO: Number of nodes with available pods: 0 Mar 8 18:04:28.445: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:29.444: INFO: Number of nodes with available pods: 0 Mar 8 18:04:29.444: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:30.444: INFO: Number of nodes with available pods: 0 Mar 8 18:04:30.444: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:31.445: INFO: Number of nodes with available pods: 0 Mar 8 18:04:31.445: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:32.446: INFO: Number of nodes with available pods: 0 Mar 8 18:04:32.446: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:33.445: INFO: Number of nodes with available pods: 0 Mar 8 18:04:33.445: INFO: Node latest-worker2 is running more than one daemon pod Mar 8 18:04:34.445: INFO: Number of nodes with available pods: 1 Mar 8 18:04:34.445: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9734, will wait for the garbage collector to delete the pods Mar 8 18:04:34.510: INFO: Deleting DaemonSet.extensions daemon-set took: 6.251604ms Mar 8 18:04:34.810: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.218367ms Mar 8 18:04:42.120: INFO: Number of nodes with available pods: 0 Mar 8 18:04:42.120: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 18:04:42.122: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9734/daemonsets","resourceVersion":"66769"},"items":null} Mar 8 18:04:42.124: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9734/pods","resourceVersion":"66769"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:04:42.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9734" for this suite. 
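The blue/green dance above is a DaemonSet with a nodeSelector plus node relabeling. A minimal sketch (the label key "color" and the busybox image are assumptions; the log only shows the color values):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          nodeSelector:
            color: blue
          containers:
          - name: app
            image: busybox
            command: ["sleep", "3600"]

    kubectl label node latest-worker2 color=blue --overwrite   # daemon pod lands on the node
    kubectl label node latest-worker2 color=green --overwrite  # daemon pod is unscheduled until the selector is updated to green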
• [SLOW TEST:18.012 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":255,"skipped":4494,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:04:42.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:04:42.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8639" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4500,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:04:42.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:05:42.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4594" for this suite. 
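The probe spec above amounts to a pod whose readiness probe can never succeed; a minimal sketch (names, image, and timings are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready
    spec:
      containers:
      - name: never-ready
        image: busybox
        command: ["sleep", "3600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]   # always fails, so Ready stays False
          initialDelaySeconds: 5
          periodSeconds: 5

A failing readiness probe only keeps the pod out of service endpoints; unlike a liveness probe it never restarts the container, which is exactly what the spec asserts over its observation window.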
• [SLOW TEST:60.139 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4508,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:05:42.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 18:05:42.627: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4ec23737-1a96-447b-a8fb-1d01244cb227" in namespace "security-context-test-144" to be "Succeeded or Failed" Mar 8 18:05:42.636: INFO: Pod "busybox-user-65534-4ec23737-1a96-447b-a8fb-1d01244cb227": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807255ms Mar 8 18:05:44.640: INFO: Pod "busybox-user-65534-4ec23737-1a96-447b-a8fb-1d01244cb227": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012543801s Mar 8 18:05:44.640: INFO: Pod "busybox-user-65534-4ec23737-1a96-447b-a8fb-1d01244cb227" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:05:44.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-144" for this suite. 
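The securityContext under test is a one-liner; a minimal sketch mirroring the pod-name pattern in the log (image and command are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-user-65534
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "id -u"]
        securityContext:
          runAsUser: 65534

kubectl logs busybox-user-65534 should print 65534 (the conventional "nobody" uid) once the pod reaches Succeeded.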
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4509,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:05:44.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7a6940d0-f387-4da0-861b-6a8560852baf STEP: Creating a pod to test consume secrets Mar 8 18:05:44.768: INFO: Waiting up to 5m0s for pod "pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b" in namespace "secrets-4580" to be "Succeeded or Failed" Mar 8 18:05:44.777: INFO: Pod "pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524438ms Mar 8 18:05:46.780: INFO: Pod "pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011899747s STEP: Saw pod success Mar 8 18:05:46.780: INFO: Pod "pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b" satisfied condition "Succeeded or Failed" Mar 8 18:05:46.782: INFO: Trying to get logs from node latest-worker pod pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b container secret-env-test: STEP: delete the pod Mar 8 18:05:46.831: INFO: Waiting for pod pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b to disappear Mar 8 18:05:46.842: INFO: Pod pod-secrets-721831ef-5037-45a0-98fd-fe6afe0f938b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:05:46.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4580" for this suite. 
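A minimal reproduction of the secret-to-env-var plumbing exercised above (secret name shortened; the key, value, and env var name are illustrative):

    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-test
    data:
      data-1: dmFsdWUtMQ==   # base64 of "value-1"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox
        command: ["sh", "-c", "env"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1

The container's log should then contain SECRET_DATA=value-1, which is the condition the test greps for.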
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4526,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:05:46.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-811a0572-e7ad-4fb6-a32c-0e502f031993 STEP: Creating secret with name s-test-opt-upd-6d7354a0-6a8e-4fb4-9c11-40752e1afa5c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-811a0572-e7ad-4fb6-a32c-0e502f031993 STEP: Updating secret s-test-opt-upd-6d7354a0-6a8e-4fb4-9c11-40752e1afa5c STEP: Creating secret with name s-test-opt-create-94bf2779-af9f-4df7-abb9-ce027c7555e0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:05:53.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7208" for this suite. • [SLOW TEST:6.257 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4535,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:05:53.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:09.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4160" for this suite. • [SLOW TEST:16.107 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":261,"skipped":4556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:09.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-1114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1114 to expose endpoints map[] Mar 8 18:06:09.322: INFO: Get endpoints failed (6.867336ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 8 18:06:10.326: INFO: successfully validated that service multi-endpoint-test in namespace services-1114 exposes endpoints map[] (1.009997434s elapsed) STEP: Creating pod pod1 in namespace services-1114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1114 to expose endpoints map[pod1:[100]] Mar 8 18:06:12.364: INFO: successfully validated that service multi-endpoint-test in namespace services-1114 exposes endpoints map[pod1:[100]] (2.034261601s elapsed) STEP: Creating pod pod2 in namespace services-1114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1114 to expose endpoints map[pod1:[100] pod2:[101]] Mar 8 18:06:14.488: INFO: successfully validated that service multi-endpoint-test in namespace services-1114 exposes endpoints map[pod1:[100] pod2:[101]] (2.120865741s elapsed) STEP: Deleting pod pod1 in namespace services-1114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1114 to expose endpoints map[pod2:[101]] Mar 8 18:06:15.552: INFO: successfully validated that service 
multi-endpoint-test in namespace services-1114 exposes endpoints map[pod2:[101]] (1.054638346s elapsed) STEP: Deleting pod pod2 in namespace services-1114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1114 to expose endpoints map[] Mar 8 18:06:16.579: INFO: successfully validated that service multi-endpoint-test in namespace services-1114 exposes endpoints map[] (1.02247315s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:16.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1114" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:7.405 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":262,"skipped":4589,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:16.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Mar 8 18:06:19.228: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5522 pod-service-account-fa621609-8b99-4625-9a07-0cb2eb68c30f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 8 18:06:21.127: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5522 pod-service-account-fa621609-8b99-4625-9a07-0cb2eb68c30f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 8 18:06:21.346: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5522 pod-service-account-fa621609-8b99-4625-9a07-0cb2eb68c30f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:21.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5522" for this suite. 
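The three exec reads above work against any pod that has default token automount; the mount paths are fixed by the kubelet (the <pod> placeholder is ours):

    kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
    kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace

Setting automountServiceAccountToken: false on the pod or the service account suppresses this mount entirely.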
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":263,"skipped":4591,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:21.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 18:06:31.770324 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 18:06:31.770: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:31.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2417" for this suite. 
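The "both valid owner and owner being deleted" setup hinges on pods carrying two ownerReferences; a sketch of the metadata involved (RC names from the log; the uid fields must be copied from the live objects):

    metadata:
      ownerReferences:
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-be-deleted
        uid: <uid-of-rc-to-be-deleted>   # copy from the live RC
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-stay
        uid: <uid-of-rc-to-stay>

When simpletest-rc-to-be-deleted goes away, the garbage collector strips only its ownerReference; pods that keep a valid owner (simpletest-rc-to-stay) survive, which is the behavior the spec asserts.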
• [SLOW TEST:10.256 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":264,"skipped":4592,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:31.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 18:06:31.851: INFO: Waiting up to 5m0s for pod "pod-26905ca0-7560-48d5-b9f8-717ac733f1ad" in namespace "emptydir-3557" to be "Succeeded or Failed" Mar 8 18:06:31.856: INFO: Pod "pod-26905ca0-7560-48d5-b9f8-717ac733f1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.932522ms Mar 8 18:06:33.859: INFO: Pod "pod-26905ca0-7560-48d5-b9f8-717ac733f1ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008303656s STEP: Saw pod success Mar 8 18:06:33.859: INFO: Pod "pod-26905ca0-7560-48d5-b9f8-717ac733f1ad" satisfied condition "Succeeded or Failed" Mar 8 18:06:33.863: INFO: Trying to get logs from node latest-worker pod pod-26905ca0-7560-48d5-b9f8-717ac733f1ad container test-container: STEP: delete the pod Mar 8 18:06:33.913: INFO: Waiting for pod pod-26905ca0-7560-48d5-b9f8-717ac733f1ad to disappear Mar 8 18:06:33.921: INFO: Pod pod-26905ca0-7560-48d5-b9f8-717ac733f1ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:33.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3557" for this suite. 
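A minimal sketch of the emptyDir pod under test (file name and content are illustrative; omitting medium selects the node's default storage, which is what the "default" in "(root,0644,default)" refers to):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo content > /test-volume/f && ls -ln /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}

Running as root with the default umask yields a 0644 file owned by uid 0, matching the "(root,0644)" part of the spec name; medium: Memory would back the same volume with tmpfs instead.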
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:33.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 8 18:06:36.063: INFO: &Pod{ObjectMeta:{send-events-02e7a685-5dab-46d0-9a2b-55351acd7f4c events-3906 /api/v1/namespaces/events-3906/pods/send-events-02e7a685-5dab-46d0-9a2b-55351acd7f4c 2841240f-6b7c-489c-b1ea-a3be0a077444 67541 0 2020-03-08 18:06:34 +0000 UTC map[name:foo time:9139819] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zblbc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zblbc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zblbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-read
y,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 18:06:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 18:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 18:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 18:06:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.149,StartTime:2020-03-08 18:06:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 18:06:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://5a5dab3fcf1649771e890a35e20178ab3c95060049e1984a3cc12291ed995fa3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 8 18:06:38.068: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 8 18:06:40.072: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:40.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3906" for this suite. 
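The scheduler and kubelet events the spec waits for can be listed directly (namespace and pod name taken from the log):

    kubectl get events --namespace=events-3906 \
      --field-selector involvedObject.name=send-events-02e7a685-5dab-46d0-9a2b-55351acd7f4c

Expect a Scheduled event from default-scheduler plus Pulled/Created/Started events from the kubelet on latest-worker; those two sources are what the "checking for scheduler event" and "checking for kubelet event" steps poll for.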
• [SLOW TEST:6.194 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":266,"skipped":4630,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:40.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:55.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6240" for this suite. STEP: Destroying namespace "nsdeletetest-81" for this suite. Mar 8 18:06:55.329: INFO: Namespace nsdeletetest-81 was already deleted STEP: Destroying namespace "nsdeletetest-5969" for this suite. 
• [SLOW TEST:15.207 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":267,"skipped":4630,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:55.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 18:06:55.395: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a400ff28-9fc7-4c6a-94cb-afe4f0c8f78a" in namespace "security-context-test-9200" to be "Succeeded or Failed" Mar 8 18:06:55.416: INFO: Pod "busybox-readonly-false-a400ff28-9fc7-4c6a-94cb-afe4f0c8f78a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.809648ms Mar 8 18:06:57.420: INFO: Pod "busybox-readonly-false-a400ff28-9fc7-4c6a-94cb-afe4f0c8f78a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024756549s Mar 8 18:06:57.420: INFO: Pod "busybox-readonly-false-a400ff28-9fc7-4c6a-94cb-afe4f0c8f78a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:57.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9200" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:57.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-157d2877-953b-468f-bc5f-6bf4cc074b4d STEP: Creating a pod to test consume secrets Mar 8 18:06:57.505: INFO: Waiting up to 5m0s for pod "pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629" in namespace "secrets-7544" to be "Succeeded or Failed" Mar 8 18:06:57.509: INFO: Pod "pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166827ms Mar 8 18:06:59.514: INFO: Pod "pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008490351s STEP: Saw pod success Mar 8 18:06:59.514: INFO: Pod "pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629" satisfied condition "Succeeded or Failed" Mar 8 18:06:59.517: INFO: Trying to get logs from node latest-worker pod pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629 container secret-volume-test: STEP: delete the pod Mar 8 18:06:59.535: INFO: Waiting for pod pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629 to disappear Mar 8 18:06:59.539: INFO: Pod pod-secrets-fe6df4e4-00aa-4e0f-84df-a272efed9629 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:06:59.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7544" for this suite. 
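The "mappings and Item Mode set" variant maps a secret key to a custom path with an explicit per-file mode; a sketch of just the volume stanza (secret name from the log; key, path, and mode are illustrative):

    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map-157d2877-953b-468f-bc5f-6bf4cc074b4d
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400   # octal works in YAML; use the decimal 256 in JSON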
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4660,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:06:59.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-8e858f94-b79d-4b77-b02e-3f222c94c3f9 STEP: Creating a pod to test consume configMaps Mar 8 18:06:59.636: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671" in namespace "projected-7474" to be "Succeeded or Failed" Mar 8 18:06:59.641: INFO: Pod "pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671": Phase="Pending", Reason="", readiness=false. Elapsed: 5.105916ms Mar 8 18:07:01.645: INFO: Pod "pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009169913s STEP: Saw pod success Mar 8 18:07:01.645: INFO: Pod "pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671" satisfied condition "Succeeded or Failed" Mar 8 18:07:01.648: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671 container projected-configmap-volume-test: STEP: delete the pod Mar 8 18:07:01.671: INFO: Waiting for pod pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671 to disappear Mar 8 18:07:01.681: INFO: Pod pod-projected-configmaps-6905f32d-5774-4ce0-8f19-76adad834671 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:07:01.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7474" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4666,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:07:01.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-a31791aa-4bfd-4348-bf0b-dac0b2a046d7 STEP: Creating a pod to test consume configMaps Mar 8 18:07:01.803: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40" in namespace "projected-7282" to be "Succeeded or Failed" Mar 8 18:07:01.807: INFO: Pod "pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401407ms Mar 8 18:07:03.811: INFO: Pod "pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008513375s STEP: Saw pod success Mar 8 18:07:03.811: INFO: Pod "pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40" satisfied condition "Succeeded or Failed" Mar 8 18:07:03.814: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40 container projected-configmap-volume-test: STEP: delete the pod Mar 8 18:07:03.833: INFO: Waiting for pod pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40 to disappear Mar 8 18:07:03.837: INFO: Pod pod-projected-configmaps-4157f790-8e36-4036-9b7c-ad9aa57d8c40 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:07:03.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7282" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4666,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:07:03.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2563 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2563 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2563 Mar 8 18:07:03.942: INFO: Found 0 stateful pods, waiting for 1 Mar 8 18:07:13.946: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 8 18:07:13.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 18:07:14.195: INFO: stderr: "I0308 18:07:14.085946 3173 log.go:172] (0xc00069a790) (0xc000922000) Create stream\nI0308 18:07:14.085996 3173 log.go:172] (0xc00069a790) (0xc000922000) Stream added, broadcasting: 1\nI0308 18:07:14.088011 3173 log.go:172] (0xc00069a790) Reply frame received for 1\nI0308 18:07:14.088054 3173 log.go:172] (0xc00069a790) (0xc0007f72c0) Create stream\nI0308 18:07:14.088072 3173 log.go:172] (0xc00069a790) (0xc0007f72c0) Stream added, broadcasting: 3\nI0308 18:07:14.088961 3173 log.go:172] (0xc00069a790) Reply frame received for 3\nI0308 18:07:14.088998 3173 log.go:172] (0xc00069a790) (0xc000402000) Create stream\nI0308 18:07:14.089011 3173 log.go:172] (0xc00069a790) (0xc000402000) Stream added, broadcasting: 5\nI0308 18:07:14.089696 3173 log.go:172] (0xc00069a790) Reply frame received for 5\nI0308 18:07:14.153542 3173 log.go:172] (0xc00069a790) Data frame received for 5\nI0308 18:07:14.153564 3173 log.go:172] (0xc000402000) (5) Data frame handling\nI0308 18:07:14.153579 3173 log.go:172] (0xc000402000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 18:07:14.188769 3173 log.go:172] (0xc00069a790) Data frame received for 3\nI0308 18:07:14.188796 3173 log.go:172] (0xc0007f72c0) (3) Data frame handling\nI0308 18:07:14.188812 3173 log.go:172] (0xc0007f72c0) (3) Data frame sent\nI0308 
18:07:14.189076 3173 log.go:172] (0xc00069a790) Data frame received for 5\nI0308 18:07:14.189107 3173 log.go:172] (0xc000402000) (5) Data frame handling\nI0308 18:07:14.189132 3173 log.go:172] (0xc00069a790) Data frame received for 3\nI0308 18:07:14.189168 3173 log.go:172] (0xc0007f72c0) (3) Data frame handling\nI0308 18:07:14.190741 3173 log.go:172] (0xc00069a790) Data frame received for 1\nI0308 18:07:14.190756 3173 log.go:172] (0xc000922000) (1) Data frame handling\nI0308 18:07:14.190763 3173 log.go:172] (0xc000922000) (1) Data frame sent\nI0308 18:07:14.190773 3173 log.go:172] (0xc00069a790) (0xc000922000) Stream removed, broadcasting: 1\nI0308 18:07:14.190786 3173 log.go:172] (0xc00069a790) Go away received\nI0308 18:07:14.191112 3173 log.go:172] (0xc00069a790) (0xc000922000) Stream removed, broadcasting: 1\nI0308 18:07:14.191133 3173 log.go:172] (0xc00069a790) (0xc0007f72c0) Stream removed, broadcasting: 3\nI0308 18:07:14.191142 3173 log.go:172] (0xc00069a790) (0xc000402000) Stream removed, broadcasting: 5\n" Mar 8 18:07:14.195: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 18:07:14.195: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 18:07:14.198: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 18:07:24.203: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 18:07:24.203: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 18:07:24.217: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999947s Mar 8 18:07:25.228: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996454484s Mar 8 18:07:26.232: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98497648s Mar 8 18:07:27.236: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980836028s Mar 8 18:07:28.241: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977103432s Mar 8 18:07:29.245: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972400803s Mar 8 18:07:30.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968374972s Mar 8 18:07:31.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.960047176s Mar 8 18:07:32.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.95649069s Mar 8 18:07:33.275: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.694001ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2563 Mar 8 18:07:34.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 18:07:34.493: INFO: stderr: "I0308 18:07:34.425041 3193 log.go:172] (0xc000b95810) (0xc000aeaa00) Create stream\nI0308 18:07:34.425102 3193 log.go:172] (0xc000b95810) (0xc000aeaa00) Stream added, broadcasting: 1\nI0308 18:07:34.427261 3193 log.go:172] (0xc000b95810) Reply frame received for 1\nI0308 18:07:34.427297 3193 log.go:172] (0xc000b95810) (0xc000bc8320) Create stream\nI0308 18:07:34.427314 3193 log.go:172] (0xc000b95810) (0xc000bc8320) Stream added, broadcasting: 3\nI0308 18:07:34.429090 3193 log.go:172] (0xc000b95810) Reply frame received for 3\nI0308 18:07:34.429127 3193 log.go:172] (0xc000b95810) 
(0xc000aea000) Create stream\nI0308 18:07:34.429142 3193 log.go:172] (0xc000b95810) (0xc000aea000) Stream added, broadcasting: 5\nI0308 18:07:34.429872 3193 log.go:172] (0xc000b95810) Reply frame received for 5\nI0308 18:07:34.487329 3193 log.go:172] (0xc000b95810) Data frame received for 3\nI0308 18:07:34.487421 3193 log.go:172] (0xc000bc8320) (3) Data frame handling\nI0308 18:07:34.487462 3193 log.go:172] (0xc000bc8320) (3) Data frame sent\nI0308 18:07:34.487686 3193 log.go:172] (0xc000b95810) Data frame received for 5\nI0308 18:07:34.487715 3193 log.go:172] (0xc000b95810) Data frame received for 3\nI0308 18:07:34.487746 3193 log.go:172] (0xc000bc8320) (3) Data frame handling\nI0308 18:07:34.487766 3193 log.go:172] (0xc000aea000) (5) Data frame handling\nI0308 18:07:34.487787 3193 log.go:172] (0xc000aea000) (5) Data frame sent\nI0308 18:07:34.487793 3193 log.go:172] (0xc000b95810) Data frame received for 5\nI0308 18:07:34.487798 3193 log.go:172] (0xc000aea000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 18:07:34.488569 3193 log.go:172] (0xc000b95810) Data frame received for 1\nI0308 18:07:34.488588 3193 log.go:172] (0xc000aeaa00) (1) Data frame handling\nI0308 18:07:34.488600 3193 log.go:172] (0xc000aeaa00) (1) Data frame sent\nI0308 18:07:34.488615 3193 log.go:172] (0xc000b95810) (0xc000aeaa00) Stream removed, broadcasting: 1\nI0308 18:07:34.488960 3193 log.go:172] (0xc000b95810) (0xc000aeaa00) Stream removed, broadcasting: 1\nI0308 18:07:34.488982 3193 log.go:172] (0xc000b95810) (0xc000bc8320) Stream removed, broadcasting: 3\nI0308 18:07:34.489162 3193 log.go:172] (0xc000b95810) (0xc000aea000) Stream removed, broadcasting: 5\n" Mar 8 18:07:34.493: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 18:07:34.493: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 18:07:34.496: INFO: Found 1 stateful pods, waiting for 3 Mar 8 18:07:44.501: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 18:07:44.501: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 18:07:44.501: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 8 18:07:44.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 18:07:44.745: INFO: stderr: "I0308 18:07:44.646539 3214 log.go:172] (0xc00003afd0) (0xc0008dc6e0) Create stream\nI0308 18:07:44.646596 3214 log.go:172] (0xc00003afd0) (0xc0008dc6e0) Stream added, broadcasting: 1\nI0308 18:07:44.651198 3214 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0308 18:07:44.651242 3214 log.go:172] (0xc00003afd0) (0xc0005b7680) Create stream\nI0308 18:07:44.651259 3214 log.go:172] (0xc00003afd0) (0xc0005b7680) Stream added, broadcasting: 3\nI0308 18:07:44.654650 3214 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0308 18:07:44.654705 3214 log.go:172] (0xc00003afd0) (0xc000424aa0) Create stream\nI0308 18:07:44.654724 3214 log.go:172] (0xc00003afd0) (0xc000424aa0) Stream added, broadcasting: 5\nI0308 18:07:44.655857 3214 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0308 
18:07:44.740889 3214 log.go:172] (0xc00003afd0) Data frame received for 5\nI0308 18:07:44.740931 3214 log.go:172] (0xc00003afd0) Data frame received for 3\nI0308 18:07:44.740965 3214 log.go:172] (0xc0005b7680) (3) Data frame handling\nI0308 18:07:44.740978 3214 log.go:172] (0xc000424aa0) (5) Data frame handling\nI0308 18:07:44.740995 3214 log.go:172] (0xc000424aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 18:07:44.741002 3214 log.go:172] (0xc00003afd0) Data frame received for 5\nI0308 18:07:44.741040 3214 log.go:172] (0xc000424aa0) (5) Data frame handling\nI0308 18:07:44.741054 3214 log.go:172] (0xc0005b7680) (3) Data frame sent\nI0308 18:07:44.741069 3214 log.go:172] (0xc00003afd0) Data frame received for 3\nI0308 18:07:44.741078 3214 log.go:172] (0xc0005b7680) (3) Data frame handling\nI0308 18:07:44.742312 3214 log.go:172] (0xc00003afd0) Data frame received for 1\nI0308 18:07:44.742330 3214 log.go:172] (0xc0008dc6e0) (1) Data frame handling\nI0308 18:07:44.742338 3214 log.go:172] (0xc0008dc6e0) (1) Data frame sent\nI0308 18:07:44.742352 3214 log.go:172] (0xc00003afd0) (0xc0008dc6e0) Stream removed, broadcasting: 1\nI0308 18:07:44.742433 3214 log.go:172] (0xc00003afd0) Go away received\nI0308 18:07:44.742658 3214 log.go:172] (0xc00003afd0) (0xc0008dc6e0) Stream removed, broadcasting: 1\nI0308 18:07:44.742680 3214 log.go:172] (0xc00003afd0) (0xc0005b7680) Stream removed, broadcasting: 3\nI0308 18:07:44.742687 3214 log.go:172] (0xc00003afd0) (0xc000424aa0) Stream removed, broadcasting: 5\n" Mar 8 18:07:44.746: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 18:07:44.746: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 18:07:44.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 18:07:44.999: INFO: stderr: "I0308 18:07:44.887898 3234 log.go:172] (0xc0009b8000) (0xc0009ae000) Create stream\nI0308 18:07:44.887957 3234 log.go:172] (0xc0009b8000) (0xc0009ae000) Stream added, broadcasting: 1\nI0308 18:07:44.890454 3234 log.go:172] (0xc0009b8000) Reply frame received for 1\nI0308 18:07:44.890503 3234 log.go:172] (0xc0009b8000) (0xc0009ae0a0) Create stream\nI0308 18:07:44.890518 3234 log.go:172] (0xc0009b8000) (0xc0009ae0a0) Stream added, broadcasting: 3\nI0308 18:07:44.891341 3234 log.go:172] (0xc0009b8000) Reply frame received for 3\nI0308 18:07:44.891369 3234 log.go:172] (0xc0009b8000) (0xc0006bf400) Create stream\nI0308 18:07:44.891380 3234 log.go:172] (0xc0009b8000) (0xc0006bf400) Stream added, broadcasting: 5\nI0308 18:07:44.892254 3234 log.go:172] (0xc0009b8000) Reply frame received for 5\nI0308 18:07:44.964971 3234 log.go:172] (0xc0009b8000) Data frame received for 5\nI0308 18:07:44.964998 3234 log.go:172] (0xc0006bf400) (5) Data frame handling\nI0308 18:07:44.965024 3234 log.go:172] (0xc0006bf400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 18:07:44.993876 3234 log.go:172] (0xc0009b8000) Data frame received for 3\nI0308 18:07:44.993908 3234 log.go:172] (0xc0009ae0a0) (3) Data frame handling\nI0308 18:07:44.993930 3234 log.go:172] (0xc0009ae0a0) (3) Data frame sent\nI0308 18:07:44.994072 3234 log.go:172] (0xc0009b8000) Data frame received for 3\nI0308 18:07:44.994097 3234 log.go:172] (0xc0009ae0a0) (3) Data 
frame handling\nI0308 18:07:44.994110 3234 log.go:172] (0xc0009b8000) Data frame received for 5\nI0308 18:07:44.994164 3234 log.go:172] (0xc0006bf400) (5) Data frame handling\nI0308 18:07:44.996101 3234 log.go:172] (0xc0009b8000) Data frame received for 1\nI0308 18:07:44.996119 3234 log.go:172] (0xc0009ae000) (1) Data frame handling\nI0308 18:07:44.996144 3234 log.go:172] (0xc0009ae000) (1) Data frame sent\nI0308 18:07:44.996164 3234 log.go:172] (0xc0009b8000) (0xc0009ae000) Stream removed, broadcasting: 1\nI0308 18:07:44.996193 3234 log.go:172] (0xc0009b8000) Go away received\nI0308 18:07:44.996601 3234 log.go:172] (0xc0009b8000) (0xc0009ae000) Stream removed, broadcasting: 1\nI0308 18:07:44.996622 3234 log.go:172] (0xc0009b8000) (0xc0009ae0a0) Stream removed, broadcasting: 3\nI0308 18:07:44.996628 3234 log.go:172] (0xc0009b8000) (0xc0006bf400) Stream removed, broadcasting: 5\n" Mar 8 18:07:45.000: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 18:07:45.000: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 18:07:45.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 18:07:45.227: INFO: stderr: "I0308 18:07:45.119369 3256 log.go:172] (0xc0009c6000) (0xc0003a4b40) Create stream\nI0308 18:07:45.119417 3256 log.go:172] (0xc0009c6000) (0xc0003a4b40) Stream added, broadcasting: 1\nI0308 18:07:45.121642 3256 log.go:172] (0xc0009c6000) Reply frame received for 1\nI0308 18:07:45.121686 3256 log.go:172] (0xc0009c6000) (0xc00099e000) Create stream\nI0308 18:07:45.121696 3256 log.go:172] (0xc0009c6000) (0xc00099e000) Stream added, broadcasting: 3\nI0308 18:07:45.122873 3256 log.go:172] (0xc0009c6000) Reply frame received for 3\nI0308 18:07:45.122905 3256 log.go:172] (0xc0009c6000) (0xc000a3e000) Create stream\nI0308 18:07:45.122917 3256 log.go:172] (0xc0009c6000) (0xc000a3e000) Stream added, broadcasting: 5\nI0308 18:07:45.123812 3256 log.go:172] (0xc0009c6000) Reply frame received for 5\nI0308 18:07:45.184131 3256 log.go:172] (0xc0009c6000) Data frame received for 5\nI0308 18:07:45.184153 3256 log.go:172] (0xc000a3e000) (5) Data frame handling\nI0308 18:07:45.184166 3256 log.go:172] (0xc000a3e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 18:07:45.221441 3256 log.go:172] (0xc0009c6000) Data frame received for 5\nI0308 18:07:45.221547 3256 log.go:172] (0xc000a3e000) (5) Data frame handling\nI0308 18:07:45.221602 3256 log.go:172] (0xc0009c6000) Data frame received for 3\nI0308 18:07:45.221623 3256 log.go:172] (0xc00099e000) (3) Data frame handling\nI0308 18:07:45.221637 3256 log.go:172] (0xc00099e000) (3) Data frame sent\nI0308 18:07:45.221648 3256 log.go:172] (0xc0009c6000) Data frame received for 3\nI0308 18:07:45.221660 3256 log.go:172] (0xc00099e000) (3) Data frame handling\nI0308 18:07:45.223276 3256 log.go:172] (0xc0009c6000) Data frame received for 1\nI0308 18:07:45.223305 3256 log.go:172] (0xc0003a4b40) (1) Data frame handling\nI0308 18:07:45.223319 3256 log.go:172] (0xc0003a4b40) (1) Data frame sent\nI0308 18:07:45.223414 3256 log.go:172] (0xc0009c6000) (0xc0003a4b40) Stream removed, broadcasting: 1\nI0308 18:07:45.223753 3256 log.go:172] (0xc0009c6000) (0xc0003a4b40) Stream removed, broadcasting: 1\nI0308 18:07:45.223771 3256 log.go:172] 
(0xc0009c6000) (0xc00099e000) Stream removed, broadcasting: 3\nI0308 18:07:45.223936 3256 log.go:172] (0xc0009c6000) Go away received\nI0308 18:07:45.223962 3256 log.go:172] (0xc0009c6000) (0xc000a3e000) Stream removed, broadcasting: 5\n" Mar 8 18:07:45.227: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 18:07:45.227: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 18:07:45.227: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 18:07:45.463: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 8 18:07:55.494: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 18:07:55.495: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 18:07:55.495: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 18:07:55.505: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999544s Mar 8 18:07:56.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995251195s Mar 8 18:07:57.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99075479s Mar 8 18:07:58.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985825287s Mar 8 18:07:59.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981393052s Mar 8 18:08:00.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977162607s Mar 8 18:08:01.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949383184s Mar 8 18:08:02.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.943252113s Mar 8 18:08:03.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938386254s Mar 8 18:08:04.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.440028ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2563 Mar 8 18:08:05.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 18:08:05.799: INFO: stderr: "I0308 18:08:05.729861 3276 log.go:172] (0xc0006fa000) (0xc0006af5e0) Create stream\nI0308 18:08:05.729917 3276 log.go:172] (0xc0006fa000) (0xc0006af5e0) Stream added, broadcasting: 1\nI0308 18:08:05.732226 3276 log.go:172] (0xc0006fa000) Reply frame received for 1\nI0308 18:08:05.732257 3276 log.go:172] (0xc0006fa000) (0xc00094e000) Create stream\nI0308 18:08:05.732267 3276 log.go:172] (0xc0006fa000) (0xc00094e000) Stream added, broadcasting: 3\nI0308 18:08:05.733077 3276 log.go:172] (0xc0006fa000) Reply frame received for 3\nI0308 18:08:05.733113 3276 log.go:172] (0xc0006fa000) (0xc0004e4aa0) Create stream\nI0308 18:08:05.733124 3276 log.go:172] (0xc0006fa000) (0xc0004e4aa0) Stream added, broadcasting: 5\nI0308 18:08:05.734219 3276 log.go:172] (0xc0006fa000) Reply frame received for 5\nI0308 18:08:05.793966 3276 log.go:172] (0xc0006fa000) Data frame received for 3\nI0308 18:08:05.793988 3276 log.go:172] (0xc00094e000) (3) Data frame handling\nI0308 18:08:05.794001 3276 log.go:172] (0xc00094e000) (3) Data frame sent\nI0308 18:08:05.794012 3276 log.go:172] (0xc0006fa000) Data frame received for 3\nI0308 18:08:05.794022 3276 log.go:172] (0xc00094e000) (3) Data frame 
handling\nI0308 18:08:05.794041 3276 log.go:172] (0xc0006fa000) Data frame received for 5\nI0308 18:08:05.794056 3276 log.go:172] (0xc0004e4aa0) (5) Data frame handling\nI0308 18:08:05.794066 3276 log.go:172] (0xc0004e4aa0) (5) Data frame sent\nI0308 18:08:05.794073 3276 log.go:172] (0xc0006fa000) Data frame received for 5\nI0308 18:08:05.794082 3276 log.go:172] (0xc0004e4aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 18:08:05.795423 3276 log.go:172] (0xc0006fa000) Data frame received for 1\nI0308 18:08:05.795445 3276 log.go:172] (0xc0006af5e0) (1) Data frame handling\nI0308 18:08:05.795457 3276 log.go:172] (0xc0006af5e0) (1) Data frame sent\nI0308 18:08:05.795473 3276 log.go:172] (0xc0006fa000) (0xc0006af5e0) Stream removed, broadcasting: 1\nI0308 18:08:05.795493 3276 log.go:172] (0xc0006fa000) Go away received\nI0308 18:08:05.795918 3276 log.go:172] (0xc0006fa000) (0xc0006af5e0) Stream removed, broadcasting: 1\nI0308 18:08:05.795944 3276 log.go:172] (0xc0006fa000) (0xc00094e000) Stream removed, broadcasting: 3\nI0308 18:08:05.795954 3276 log.go:172] (0xc0006fa000) (0xc0004e4aa0) Stream removed, broadcasting: 5\n" Mar 8 18:08:05.800: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 18:08:05.800: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 18:08:05.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 18:08:06.006: INFO: stderr: "I0308 18:08:05.932984 3297 log.go:172] (0xc0009e0fd0) (0xc0008f6500) Create stream\nI0308 18:08:05.933032 3297 log.go:172] (0xc0009e0fd0) (0xc0008f6500) Stream added, broadcasting: 1\nI0308 18:08:05.936711 3297 log.go:172] (0xc0009e0fd0) Reply frame received for 1\nI0308 18:08:05.936750 3297 log.go:172] (0xc0009e0fd0) (0xc0005615e0) Create stream\nI0308 18:08:05.936758 3297 log.go:172] (0xc0009e0fd0) (0xc0005615e0) Stream added, broadcasting: 3\nI0308 18:08:05.937439 3297 log.go:172] (0xc0009e0fd0) Reply frame received for 3\nI0308 18:08:05.937470 3297 log.go:172] (0xc0009e0fd0) (0xc0002aaa00) Create stream\nI0308 18:08:05.937482 3297 log.go:172] (0xc0009e0fd0) (0xc0002aaa00) Stream added, broadcasting: 5\nI0308 18:08:05.938228 3297 log.go:172] (0xc0009e0fd0) Reply frame received for 5\nI0308 18:08:06.001417 3297 log.go:172] (0xc0009e0fd0) Data frame received for 5\nI0308 18:08:06.001442 3297 log.go:172] (0xc0002aaa00) (5) Data frame handling\nI0308 18:08:06.001450 3297 log.go:172] (0xc0002aaa00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 18:08:06.001474 3297 log.go:172] (0xc0009e0fd0) Data frame received for 3\nI0308 18:08:06.001500 3297 log.go:172] (0xc0005615e0) (3) Data frame handling\nI0308 18:08:06.001515 3297 log.go:172] (0xc0005615e0) (3) Data frame sent\nI0308 18:08:06.001525 3297 log.go:172] (0xc0009e0fd0) Data frame received for 3\nI0308 18:08:06.001542 3297 log.go:172] (0xc0009e0fd0) Data frame received for 5\nI0308 18:08:06.001566 3297 log.go:172] (0xc0002aaa00) (5) Data frame handling\nI0308 18:08:06.001584 3297 log.go:172] (0xc0005615e0) (3) Data frame handling\nI0308 18:08:06.002777 3297 log.go:172] (0xc0009e0fd0) Data frame received for 1\nI0308 18:08:06.002791 3297 log.go:172] (0xc0008f6500) (1) Data frame handling\nI0308 18:08:06.002799 3297 log.go:172] 
(0xc0008f6500) (1) Data frame sent\nI0308 18:08:06.002808 3297 log.go:172] (0xc0009e0fd0) (0xc0008f6500) Stream removed, broadcasting: 1\nI0308 18:08:06.002818 3297 log.go:172] (0xc0009e0fd0) Go away received\nI0308 18:08:06.003158 3297 log.go:172] (0xc0009e0fd0) (0xc0008f6500) Stream removed, broadcasting: 1\nI0308 18:08:06.003175 3297 log.go:172] (0xc0009e0fd0) (0xc0005615e0) Stream removed, broadcasting: 3\nI0308 18:08:06.003183 3297 log.go:172] (0xc0009e0fd0) (0xc0002aaa00) Stream removed, broadcasting: 5\n" Mar 8 18:08:06.006: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 18:08:06.006: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 18:08:06.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2563 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 18:08:06.202: INFO: stderr: "I0308 18:08:06.134663 3319 log.go:172] (0xc0000e8370) (0xc0008da000) Create stream\nI0308 18:08:06.134709 3319 log.go:172] (0xc0000e8370) (0xc0008da000) Stream added, broadcasting: 1\nI0308 18:08:06.136168 3319 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0308 18:08:06.136197 3319 log.go:172] (0xc0000e8370) (0xc00080d180) Create stream\nI0308 18:08:06.136204 3319 log.go:172] (0xc0000e8370) (0xc00080d180) Stream added, broadcasting: 3\nI0308 18:08:06.136964 3319 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0308 18:08:06.136990 3319 log.go:172] (0xc0000e8370) (0xc00080d220) Create stream\nI0308 18:08:06.137002 3319 log.go:172] (0xc0000e8370) (0xc00080d220) Stream added, broadcasting: 5\nI0308 18:08:06.137711 3319 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0308 18:08:06.196624 3319 log.go:172] (0xc0000e8370) Data frame received for 3\nI0308 18:08:06.196664 3319 log.go:172] (0xc0000e8370) Data frame received for 5\nI0308 18:08:06.196699 3319 log.go:172] (0xc00080d220) (5) Data frame handling\nI0308 18:08:06.196711 3319 log.go:172] (0xc00080d220) (5) Data frame sent\nI0308 18:08:06.196719 3319 log.go:172] (0xc0000e8370) Data frame received for 5\nI0308 18:08:06.196729 3319 log.go:172] (0xc00080d220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 18:08:06.196755 3319 log.go:172] (0xc00080d180) (3) Data frame handling\nI0308 18:08:06.196767 3319 log.go:172] (0xc00080d180) (3) Data frame sent\nI0308 18:08:06.196775 3319 log.go:172] (0xc0000e8370) Data frame received for 3\nI0308 18:08:06.196785 3319 log.go:172] (0xc00080d180) (3) Data frame handling\nI0308 18:08:06.197847 3319 log.go:172] (0xc0000e8370) Data frame received for 1\nI0308 18:08:06.197864 3319 log.go:172] (0xc0008da000) (1) Data frame handling\nI0308 18:08:06.197876 3319 log.go:172] (0xc0008da000) (1) Data frame sent\nI0308 18:08:06.197891 3319 log.go:172] (0xc0000e8370) (0xc0008da000) Stream removed, broadcasting: 1\nI0308 18:08:06.197908 3319 log.go:172] (0xc0000e8370) Go away received\nI0308 18:08:06.198274 3319 log.go:172] (0xc0000e8370) (0xc0008da000) Stream removed, broadcasting: 1\nI0308 18:08:06.198295 3319 log.go:172] (0xc0000e8370) (0xc00080d180) Stream removed, broadcasting: 3\nI0308 18:08:06.198304 3319 log.go:172] (0xc0000e8370) (0xc00080d220) Stream removed, broadcasting: 5\n" Mar 8 18:08:06.202: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 18:08:06.202: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 18:08:06.202: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 8 18:08:16.214: INFO: Deleting all statefulset in ns statefulset-2563 Mar 8 18:08:16.216: INFO: Scaling statefulset ss to 0 Mar 8 18:08:16.221: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 18:08:16.223: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:08:16.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2563" for this suite. • [SLOW TEST:72.389 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":272,"skipped":4681,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:08:16.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 8 18:08:16.302: INFO: Waiting up to 5m0s for pod "downward-api-151d224a-9c67-431a-883b-c4cc89c82e16" in namespace "downward-api-8057" to be "Succeeded or Failed" Mar 8 18:08:16.323: INFO: Pod "downward-api-151d224a-9c67-431a-883b-c4cc89c82e16": Phase="Pending", Reason="", readiness=false. Elapsed: 20.999071ms Mar 8 18:08:18.327: INFO: Pod "downward-api-151d224a-9c67-431a-883b-c4cc89c82e16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024939808s Mar 8 18:08:20.330: INFO: Pod "downward-api-151d224a-9c67-431a-883b-c4cc89c82e16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028450848s STEP: Saw pod success Mar 8 18:08:20.330: INFO: Pod "downward-api-151d224a-9c67-431a-883b-c4cc89c82e16" satisfied condition "Succeeded or Failed" Mar 8 18:08:20.333: INFO: Trying to get logs from node latest-worker pod downward-api-151d224a-9c67-431a-883b-c4cc89c82e16 container dapi-container: STEP: delete the pod Mar 8 18:08:20.385: INFO: Waiting for pod downward-api-151d224a-9c67-431a-883b-c4cc89c82e16 to disappear Mar 8 18:08:20.391: INFO: Pod downward-api-151d224a-9c67-431a-883b-c4cc89c82e16 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:08:20.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8057" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4691,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:08:20.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 8 18:08:20.448: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-watch-closed ba0037a8-577e-4aa2-83fc-cc9c935b2d6f 68275 0 2020-03-08 18:08:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 18:08:20.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-watch-closed ba0037a8-577e-4aa2-83fc-cc9c935b2d6f 68276 0 2020-03-08 18:08:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 8 18:08:20.459: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-watch-closed ba0037a8-577e-4aa2-83fc-cc9c935b2d6f 68277 0 2020-03-08 18:08:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 8 18:08:20.459: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-watch-closed ba0037a8-577e-4aa2-83fc-cc9c935b2d6f 68278 0 2020-03-08 18:08:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:08:20.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-48" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":274,"skipped":4704,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 8 18:08:20.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 8 18:08:20.543: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 8 18:08:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4903" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":275,"skipped":4711,"failed":0} SSSSSSSMar 8 18:08:21.124: INFO: Running AfterSuite actions on all nodes Mar 8 18:08:21.124: INFO: Running AfterSuite actions on node 1 Mar 8 18:08:21.124: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4718,"failed":0} Ran 275 of 4993 Specs in 3817.061 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4718 Skipped PASS