I0509 21:08:34.852929 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0509 21:08:34.853342 7 e2e.go:109] Starting e2e run "1731d905-1414-4d35-b0f3-4c9009c2d827" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589058513 - Will randomize all specs
Will run 278 of 4842 specs
May 9 21:08:34.907: INFO: >>> kubeConfig: /root/.kube/config
May 9 21:08:34.914: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 9 21:08:34.935: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 9 21:08:34.980: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 9 21:08:34.980: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 9 21:08:34.980: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 9 21:08:34.990: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 9 21:08:34.990: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 9 21:08:34.990: INFO: e2e test version: v1.17.4
May 9 21:08:34.991: INFO: kube-apiserver version: v1.17.2
May 9 21:08:34.991: INFO: >>> kubeConfig: /root/.kube/config
May 9 21:08:34.995: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:08:34.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
May 9 21:08:35.069: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May 9 21:08:35.072: INFO: >>> kubeConfig: /root/.kube/config
May 9 21:08:38.035: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:08:48.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3050" for this suite.
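For reference, the pair of objects this spec creates looks roughly like the sketch below: two CustomResourceDefinitions that share one group/version but declare different kinds, both of which must then show up in the apiserver's /openapi/v2 document. This is a minimal sketch against the apiextensions v1 API; the group mygroup.example.com and the kinds Foo/Bar are illustrative stand-ins (the real spec generates randomized e2e-test-crd-publish-openapi-* names).

package main

import (
    "fmt"

    apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCRD builds a CRD in one shared group/version; the spec creates two of
// these with different kinds and expects both kinds to be published.
func newCRD(kind, plural string) *apiextv1.CustomResourceDefinition {
    return &apiextv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: plural + ".mygroup.example.com"}, // illustrative group
        Spec: apiextv1.CustomResourceDefinitionSpec{
            Group: "mygroup.example.com",
            Scope: apiextv1.NamespaceScoped,
            Names: apiextv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
            Versions: []apiextv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                // A structural schema is required for OpenAPI publishing.
                Schema: &apiextv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
                },
            }},
        },
    }
}

func main() {
    // Same group and version, different kinds: the axis under test.
    fmt.Println(newCRD("Foo", "foos").Name, newCRD("Bar", "bars").Name)
}

Since group and version are identical, the published OpenAPI document can only tell the two CRDs apart by kind, which is exactly the property the spec asserts.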
• [SLOW TEST:13.620 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:08:48.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 9 21:08:58.832: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:08:58.864: INFO: Pod pod-with-poststart-http-hook still exists
May 9 21:09:00.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:09:00.868: INFO: Pod pod-with-poststart-http-hook still exists
May 9 21:09:02.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:09:02.869: INFO: Pod pod-with-poststart-http-hook still exists
May 9 21:09:04.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:09:04.868: INFO: Pod pod-with-poststart-http-hook still exists
May 9 21:09:06.865: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:09:06.868: INFO: Pod pod-with-poststart-http-hook still exists
May 9 21:09:08.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:09:08.869: INFO: Pod pod-with-poststart-http-hook still exists
May 9 21:09:10.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 9 21:09:10.868: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:09:10.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5883" for this suite.
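The pod this spec creates pairs a long-running container with a postStart httpGet hook pointed at the handler pod from BeforeEach; the kubelet must deliver that GET right after the container starts, and the spec verifies the handler saw it before deleting the pod (the "disappear" polling above). A rough sketch of the pod, using the v1.17-era k8s.io/api types of this run (corev1.Handler was later renamed LifecycleHandler); the image and handler IP are placeholders:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    handlerIP := "10.244.1.1" // placeholder: IP of the hook-handler pod
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "k8s.gcr.io/pause:3.1", // placeholder long-running image
                // Fired by the kubelet immediately after container start.
                Lifecycle: &corev1.Lifecycle{
                    PostStart: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=poststart",
                            Host: handlerIP,
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}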
• [SLOW TEST:22.261 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":40,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:09:10.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6328
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 9 21:09:10.919: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 9 21:09:37.083: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.220:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 9 21:09:37.083: INFO: >>> kubeConfig: /root/.kube/config
I0509 21:09:37.123687 7 log.go:172] (0xc001bb62c0) (0xc0019170e0) Create stream
I0509 21:09:37.123720 7 log.go:172] (0xc001bb62c0) (0xc0019170e0) Stream added, broadcasting: 1
I0509 21:09:37.126474 7 log.go:172] (0xc001bb62c0) Reply frame received for 1
I0509 21:09:37.126527 7 log.go:172] (0xc001bb62c0) (0xc001e41540) Create stream
I0509 21:09:37.126545 7 log.go:172] (0xc001bb62c0) (0xc001e41540) Stream added, broadcasting: 3
I0509 21:09:37.127836 7 log.go:172] (0xc001bb62c0) Reply frame received for 3
I0509 21:09:37.127897 7 log.go:172] (0xc001bb62c0) (0xc001e41680) Create stream
I0509 21:09:37.127911 7 log.go:172] (0xc001bb62c0) (0xc001e41680) Stream added, broadcasting: 5
I0509 21:09:37.128925 7 log.go:172] (0xc001bb62c0) Reply frame received for 5
I0509 21:09:37.226635 7 log.go:172] (0xc001bb62c0) Data frame received for 3
I0509 21:09:37.226675 7 log.go:172] (0xc001e41540) (3) Data frame handling
I0509 21:09:37.226696 7 log.go:172] (0xc001e41540) (3) Data frame sent
I0509 21:09:37.226740 7 log.go:172] (0xc001bb62c0) Data frame received for 5
I0509 21:09:37.226774 7 log.go:172] (0xc001e41680) (5) Data frame handling
I0509 21:09:37.226804 7 log.go:172] (0xc001bb62c0) Data frame received for 3
I0509 21:09:37.226822 7 log.go:172] (0xc001e41540) (3) Data frame handling
I0509 21:09:37.228140 7 log.go:172] (0xc001bb62c0) Data frame received for 1
I0509 21:09:37.228165 7 log.go:172] (0xc0019170e0) (1) Data frame handling
I0509 21:09:37.228177 7 log.go:172] (0xc0019170e0) (1) Data frame sent
I0509 21:09:37.228191 7 log.go:172] (0xc001bb62c0) (0xc0019170e0) Stream removed, broadcasting: 1
I0509 21:09:37.228225 7 log.go:172] (0xc001bb62c0) Go away received
I0509 21:09:37.228621 7 log.go:172] (0xc001bb62c0) (0xc0019170e0) Stream removed, broadcasting: 1
I0509 21:09:37.228636 7 log.go:172] (0xc001bb62c0) (0xc001e41540) Stream removed, broadcasting: 3
I0509 21:09:37.228644 7 log.go:172] (0xc001bb62c0) (0xc001e41680) Stream removed, broadcasting: 5
May 9 21:09:37.228: INFO: Found all expected endpoints: [netserver-0]
May 9 21:09:37.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.107:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 9 21:09:37.231: INFO: >>> kubeConfig: /root/.kube/config
I0509 21:09:37.259777 7 log.go:172] (0xc00218a2c0) (0xc001b9c1e0) Create stream
I0509 21:09:37.259799 7 log.go:172] (0xc00218a2c0) (0xc001b9c1e0) Stream added, broadcasting: 1
I0509 21:09:37.261791 7 log.go:172] (0xc00218a2c0) Reply frame received for 1
I0509 21:09:37.261825 7 log.go:172] (0xc00218a2c0) (0xc0029fd0e0) Create stream
I0509 21:09:37.261832 7 log.go:172] (0xc00218a2c0) (0xc0029fd0e0) Stream added, broadcasting: 3
I0509 21:09:37.262611 7 log.go:172] (0xc00218a2c0) Reply frame received for 3
I0509 21:09:37.262664 7 log.go:172] (0xc00218a2c0) (0xc001e417c0) Create stream
I0509 21:09:37.262695 7 log.go:172] (0xc00218a2c0) (0xc001e417c0) Stream added, broadcasting: 5
I0509 21:09:37.263415 7 log.go:172] (0xc00218a2c0) Reply frame received for 5
I0509 21:09:37.340595 7 log.go:172] (0xc00218a2c0) Data frame received for 3
I0509 21:09:37.340621 7 log.go:172] (0xc0029fd0e0) (3) Data frame handling
I0509 21:09:37.340628 7 log.go:172] (0xc0029fd0e0) (3) Data frame sent
I0509 21:09:37.340633 7 log.go:172] (0xc00218a2c0) Data frame received for 3
I0509 21:09:37.340637 7 log.go:172] (0xc0029fd0e0) (3) Data frame handling
I0509 21:09:37.340655 7 log.go:172] (0xc00218a2c0) Data frame received for 5
I0509 21:09:37.340665 7 log.go:172] (0xc001e417c0) (5) Data frame handling
I0509 21:09:37.342222 7 log.go:172] (0xc00218a2c0) Data frame received for 1
I0509 21:09:37.342253 7 log.go:172] (0xc001b9c1e0) (1) Data frame handling
I0509 21:09:37.342264 7 log.go:172] (0xc001b9c1e0) (1) Data frame sent
I0509 21:09:37.342276 7 log.go:172] (0xc00218a2c0) (0xc001b9c1e0) Stream removed, broadcasting: 1
I0509 21:09:37.342289 7 log.go:172] (0xc00218a2c0) Go away received
I0509 21:09:37.342432 7 log.go:172] (0xc00218a2c0) (0xc001b9c1e0) Stream removed, broadcasting: 1
I0509 21:09:37.342454 7 log.go:172] (0xc00218a2c0) (0xc0029fd0e0) Stream removed, broadcasting: 3
I0509 21:09:37.342471 7 log.go:172] (0xc00218a2c0) (0xc001e417c0) Stream removed, broadcasting: 5
May 9 21:09:37.342: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:09:37.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6328" for this suite.
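Stripped of the streaming machinery logged above, the check is just an HTTP GET from the host-network test pod to each netserver pod's /hostName endpoint on port 8080, and a comparison of the answer against the expected pod name. The same probe in plain Go (standard library only; the pod IPs are the ones from this run, will differ on any other cluster, and are reachable only from inside the cluster network):

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

// probe mirrors the curl run inside host-test-container-pod:
// GET http://<podIP>:8080/hostName, expecting the netserver pod to
// answer with its own hostname.
func probe(podIP string) (string, error) {
    client := &http.Client{Timeout: 15 * time.Second}
    resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    return string(body), err
}

func main() {
    for _, ip := range []string{"10.244.1.220", "10.244.2.107"} {
        name, err := probe(ip)
        fmt.Println(ip, name, err)
    }
}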
• [SLOW TEST:26.475 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":42,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:09:37.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 9 21:09:37.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9" in namespace "downward-api-4662" to be "success or failure"
May 9 21:09:37.440: INFO: Pod "downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435163ms
May 9 21:09:39.454: INFO: Pod "downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017823008s
May 9 21:09:41.458: INFO: Pod "downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021706872s
STEP: Saw pod success
May 9 21:09:41.458: INFO: Pod "downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9" satisfied condition "success or failure"
May 9 21:09:41.461: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9 container client-container:
STEP: delete the pod
May 9 21:09:41.483: INFO: Waiting for pod downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9 to disappear
May 9 21:09:41.512: INFO: Pod downwardapi-volume-cab6273b-a371-4d6a-bcf8-cf84f09ff8e9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:09:41.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4662" for this suite.
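The pod under test mounts a downwardAPI volume whose item points at limits.cpu through a resourceFieldRef while the container deliberately sets no CPU limit; the kubelet then substitutes the node's allocatable CPU, and the test reads that value back from the pod log. A sketch of the relevant wiring, assuming v1.17-era k8s.io/api types (pod name, image, and mount path are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "docker.io/library/busybox:1.29", // placeholder
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                // Note: no resources.limits.cpu here; that omission is the point.
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            // With no limit set on the container, this file
                            // receives the node-allocatable CPU instead.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}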
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:09:41.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ce7addc9-d29a-4996-90f6-55e8bb420263 STEP: Creating a pod to test consume configMaps May 9 21:09:41.604: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df" in namespace "projected-8766" to be "success or failure" May 9 21:09:41.620: INFO: Pod "pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df": Phase="Pending", Reason="", readiness=false. Elapsed: 15.12016ms May 9 21:09:43.738: INFO: Pod "pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133372973s May 9 21:09:45.862: INFO: Pod "pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257204154s May 9 21:09:47.867: INFO: Pod "pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.262364679s STEP: Saw pod success May 9 21:09:47.867: INFO: Pod "pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df" satisfied condition "success or failure" May 9 21:09:47.871: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df container projected-configmap-volume-test: STEP: delete the pod May 9 21:09:47.918: INFO: Waiting for pod pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df to disappear May 9 21:09:47.922: INFO: Pod pod-projected-configmaps-919611b0-0922-4063-8530-e3bd441712df no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:09:47.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8766" for this suite. 
• [SLOW TEST:6.409 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:09:47.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-a04bf102-2b4b-4402-b0ac-614e53274b0b in namespace container-probe-8711
May 9 21:09:52.141: INFO: Started pod busybox-a04bf102-2b4b-4402-b0ac-614e53274b0b in namespace container-probe-8711
STEP: checking the pod's current state and verifying that restartCount is present
May 9 21:09:52.144: INFO: Initial restart count of pod busybox-a04bf102-2b4b-4402-b0ac-614e53274b0b is 0
May 9 21:10:42.292: INFO: Restart count of pod container-probe-8711/busybox-a04bf102-2b4b-4402-b0ac-614e53274b0b is now 1 (50.148365657s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:10:42.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8711" for this suite.
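The busybox pod drives its own failure: the container writes /tmp/health, sleeps, deletes it, then idles, so the exec probe (cat /tmp/health) passes at first and starts failing once the file is gone, producing the restartCount 0 -> 1 transition logged above after roughly 50s. A sketch of the container spec, assuming v1.17-era k8s.io/api (Probe embedded corev1.Handler then; newer releases use ProbeHandler), with a placeholder image tag:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "busybox",
        Image: "docker.io/library/busybox:1.29", // placeholder tag
        // Healthy for ~10s, then the probe target disappears.
        Command: []string{"/bin/sh", "-c", "echo ok >/tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
            },
            InitialDelaySeconds: 15,
            FailureThreshold:    1,
        },
    }
    fmt.Println(c.Name)
}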
• [SLOW TEST:54.384 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":133,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:10:42.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:10:48.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4632" for this suite.
STEP: Destroying namespace "nsdeletetest-5953" for this suite.
May 9 21:10:48.615: INFO: Namespace nsdeletetest-5953 was already deleted
STEP: Destroying namespace "nsdeletetest-5507" for this suite.
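The namespace spec reduces to: create a Service in a throwaway namespace, delete the namespace, wait for it to disappear, recreate it, and confirm no Service survived, since namespace deletion cascades to everything inside it. A compact client-go sketch of the final check (modern client-go signatures with a context argument; the v1.17 vintage of this run omits it, and the namespace name is illustrative):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // After the namespace has been deleted and recreated, listing
    // Services in it must come back empty.
    ns := "nsdeletetest" // illustrative; the suite uses suffixed names
    svcs, err := cs.CoreV1().Services(ns).List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("services remaining in %s: %d\n", ns, len(svcs.Items))
}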
• [SLOW TEST:6.306 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":7,"skipped":136,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:10:48.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 9 21:10:49.706: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 9 21:10:51.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 9 21:10:53.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655449, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 9 21:10:56.754: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 9 21:10:56.780: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:10:56.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2880" for this suite.
STEP: Destroying namespace "webhook-2880-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.314 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":8,"skipped":142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:10:56.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 9 21:10:58.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 9 21:11:00.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 9 21:11:03.163: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 9 21:11:03.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:11:04.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2833" for this suite.
STEP: Destroying namespace "webhook-2833-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.530 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":9,"skipped":201,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:11:04.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 9 21:11:04.531: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 9 21:11:06.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9374" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":10,"skipped":210,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 9 21:11:06.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 9 21:11:06.727: INFO: Creating deployment "webserver-deployment"
May 9 21:11:06.736: INFO: Waiting for observed generation 1
May 9 21:11:08.846: INFO: Waiting for all required pods to come up
May 9 21:11:08.893: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 9 21:11:21.086: INFO: Waiting for deployment "webserver-deployment" to complete
May 9 21:11:21.099: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 9 21:11:21.104: INFO: Updating deployment webserver-deployment
May 9 21:11:21.104: INFO: Waiting for observed generation 2
May 9 21:11:23.450: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 9 21:11:23.453: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 9 21:11:23.460: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 9 21:11:23.468: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 9 21:11:23.468: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 9 21:11:23.470: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 9 21:11:23.472: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 9 21:11:23.472: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 9 21:11:23.477: INFO: Updating deployment webserver-deployment
May 9 21:11:23.477: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 9 21:11:23.667: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 9 21:11:23.708: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
May 9 21:11:26.613: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1661
/apis/apps/v1/namespaces/deployment-1661/deployments/webserver-deployment 70bff6db-49c3-4811-bc5e-db863b15a340 14793178 3 2020-05-09 21:11:06 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00324b038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-09 21:11:23 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-09 21:11:24 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 9 21:11:26.714: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1661 /apis/apps/v1/namespaces/deployment-1661/replicasets/webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 14793174 3 2020-05-09 21:11:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 70bff6db-49c3-4811-bc5e-db863b15a340 0xc002b89557 0xc002b89558}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b895c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:11:26.714: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 9 21:11:26.714: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1661 /apis/apps/v1/namespaces/deployment-1661/replicasets/webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 14793169 3 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 70bff6db-49c3-4811-bc5e-db863b15a340 0xc002b89487 0xc002b89488}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b894e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 9 21:11:26.759: INFO: Pod "webserver-deployment-595b5b9587-2f9bx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2f9bx webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-2f9bx 826bf571-04d5-491a-b19e-7ebc6eb8d5cd 14793201 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002286937 0xc002286938}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.760: INFO: Pod "webserver-deployment-595b5b9587-2fndh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2fndh webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-2fndh 0712d8cf-fcc9-4ea8-9b0b-0bab8988665d 14793185 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002286aa7 0xc002286aa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.760: INFO: Pod "webserver-deployment-595b5b9587-2jdtb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2jdtb webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-2jdtb 0e37c212-f292-4202-9efd-85c72fdee04a 14793027 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002286c07 0xc002286c08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.226,StartTime:2020-05-09 21:11:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4e56249ae305554a5d07d496b1f325d4b052d490b447b7c8c5b3906a43bae30c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.761: INFO: Pod "webserver-deployment-595b5b9587-2k5c8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2k5c8 webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-2k5c8 ad14ccf8-dd2e-40bb-a48c-83a771ca0adb 14793215 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002286d87 0xc002286d88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.761: INFO: Pod "webserver-deployment-595b5b9587-5b4zk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5b4zk webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-5b4zk d6e0c76a-11e8-4417-8d28-bb100e1560e9 14793182 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002286ee7 0xc002286ee8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.762: INFO: Pod "webserver-deployment-595b5b9587-7vkx7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7vkx7 webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-7vkx7 8dfb0a5e-1a92-497c-9e72-5ca07d319d55 14793219 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287047 0xc002287048}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.762: INFO: Pod "webserver-deployment-595b5b9587-7x8tt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7x8tt webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-7x8tt b614abcb-2bbc-4143-b344-cc48e0c69806 14793000 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc0022871a7 0xc0022871a8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.224,StartTime:2020-05-09 21:11:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9c8596ed14554196bd37170c54af4aa387a37f4d96b241572278b16bd2a3aa85,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.762: INFO: Pod "webserver-deployment-595b5b9587-89298" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-89298 webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-89298 363e6315-efb4-49bd-962f-8b6b592b4090 14793008 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287327 0xc002287328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.114,StartTime:2020-05-09 21:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bfae0a512fb5420033285add28163901594f0e5997329000b275b777e380cf70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.763: INFO: Pod "webserver-deployment-595b5b9587-9kssl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9kssl webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-9kssl 3eef1774-5a12-4706-9f08-f9c7d4a04b93 14793216 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc0022874a7 0xc0022874a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.763: INFO: Pod "webserver-deployment-595b5b9587-dgkf4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dgkf4 webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-dgkf4 631c8c73-7571-447f-b4c3-cbccda4c2003 14793231 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287607 0xc002287608}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.763: INFO: Pod "webserver-deployment-595b5b9587-flxnt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-flxnt webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-flxnt 71ec848a-4dfd-456b-a3ea-ec212f154025 14793007 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287767 0xc002287768}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.222,StartTime:2020-05-09 21:11:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1ac03cf5661e2d626bb868a0bb23def5dd093e39ada51718b6ed445dca8fa858,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.763: INFO: Pod "webserver-deployment-595b5b9587-gzv5r" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gzv5r webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-gzv5r 2523021c-7019-4bec-9045-1cc8011c6469 14793018 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc0022878e7 0xc0022878e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.115,StartTime:2020-05-09 21:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0007cf719c8b1b5e8e0043c75cbe920a08cb5af783728d77e69506fc721dd1f8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.763: INFO: Pod "webserver-deployment-595b5b9587-hvtn6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hvtn6 webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-hvtn6 72a8dcb0-e84c-4835-ae04-28e93cb95f35 14792983 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287a67 0xc002287a68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.223,StartTime:2020-05-09 21:11:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://75d137cb990d00f89b64817db016a68939b7e27d98fd472b86aaf7e9ec8d6d05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.764: INFO: Pod "webserver-deployment-595b5b9587-mw2vr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mw2vr webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-mw2vr 5eadc89d-2885-439e-bde9-be3f15eec640 14792998 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287be7 0xc002287be8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.113,StartTime:2020-05-09 21:11:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a6b10868f80e10d3347d118947483aec3db5d15442453fddda9cc1a2c58fb65b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.764: INFO: Pod "webserver-deployment-595b5b9587-mx7dt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mx7dt webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-mx7dt 1e07731e-5bf1-4356-8a3a-b807d8b32aa9 14793192 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287d67 0xc002287d68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.764: INFO: Pod "webserver-deployment-595b5b9587-qzrf8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qzrf8 webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-qzrf8 7f4bd8f4-a74b-44af-8004-e2bfaffcc189 14793243 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002287ed7 0xc002287ed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 9 21:11:26.764: INFO: Pod "webserver-deployment-595b5b9587-rjqvv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rjqvv webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-rjqvv 60385a57-de3c-41e6-8a82-189a590eeab2 14793165 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc002a360e7 0xc002a360e8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.764: INFO: Pod "webserver-deployment-595b5b9587-rvmht" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rvmht webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-rvmht 100a77dd-2eea-4236-9e3d-2ebd3978b4da 14793024 0 2020-05-09 21:11:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc000e5a0b7 0xc000e5a0b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Ena
bleServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.225,StartTime:2020-05-09 21:11:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0bf0ec598d066c82b3c9a363ac02bd3c5a492d43c40e0f71ada6a0ebc93ec71d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.764: INFO: Pod "webserver-deployment-595b5b9587-sx86m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sx86m webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-sx86m 0897316f-5b43-47d9-a08b-e1da4cdf6149 14793204 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc000e5a2f7 0xc000e5a2f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.765: INFO: Pod "webserver-deployment-595b5b9587-wvwhg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wvwhg webserver-deployment-595b5b9587- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-595b5b9587-wvwhg 19c76354-019a-4d00-9e75-796d9f9bfb5e 14793236 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 33d2d112-f2ac-4350-9204-defee12bc8d3 0xc000e5a5c7 0xc000e5a5c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.765: INFO: Pod "webserver-deployment-c7997dcc8-4thrz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4thrz webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-4thrz 695b7cf5-32e2-42e5-931b-d0cbff2a70c0 14793253 0 2020-05-09 21:11:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000e5a757 0xc000e5a758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.228,StartTime:2020-05-09 21:11:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.765: INFO: Pod "webserver-deployment-c7997dcc8-58dgh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-58dgh webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-58dgh 260d3f00-2cec-47ce-b585-d270ffa7b056 14793248 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000e5ab07 0xc000e5ab08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.765: INFO: Pod "webserver-deployment-c7997dcc8-72hhl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-72hhl webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-72hhl b2b59d52-3c7d-4663-8c0d-bd29a82e96b7 14793200 0 2020-05-09 21:11:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc0005e2447 0xc0005e2448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.118,StartTime:2020-05-09 21:11:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.766: INFO: Pod "webserver-deployment-c7997dcc8-8rvqv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8rvqv webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-8rvqv ec3e6ff8-11e9-4a57-ac24-e2072b69cd06 14793093 0 2020-05-09 21:11:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc0005e2b37 0xc0005e2b38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.766: INFO: Pod "webserver-deployment-c7997dcc8-9v2gn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9v2gn webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-9v2gn da081c09-277f-4d8c-963e-471b6155927d 14793244 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc0005e3507 0xc0005e3508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.766: INFO: Pod "webserver-deployment-c7997dcc8-ctb24" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ctb24 webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-ctb24 381f80fa-8583-4d4d-99a9-e08a59f5211e 14793209 0 2020-05-09 21:11:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc0005e3ed7 0xc0005e3ed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.227,StartTime:2020-05-09 21:11:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.766: INFO: Pod "webserver-deployment-c7997dcc8-lrt87" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lrt87 webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-lrt87 6959683d-78fb-4e82-9e87-b644ecd2869d 14793194 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000512267 0xc000512268}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.766: INFO: Pod "webserver-deployment-c7997dcc8-nsknp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nsknp webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-nsknp d0ac223d-6f80-4207-90d8-0cfb26ba6276 14793254 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000512487 0xc000512488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.767: INFO: Pod "webserver-deployment-c7997dcc8-tz2ps" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tz2ps webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-tz2ps 286bf62d-9df9-4e83-bfb8-af195a7438ef 14793170 0 2020-05-09 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000512877 0xc000512878}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.767: INFO: Pod "webserver-deployment-c7997dcc8-wfkrb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfkrb webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-wfkrb eb2ec991-4447-47e9-b534-18d999cc83ab 14793249 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000512b47 0xc000512b48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.767: INFO: Pod "webserver-deployment-c7997dcc8-wqcn9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wqcn9 webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-wqcn9 c8833298-8aa8-4440-b6d1-41f7a0afe799 14793173 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000318417 0xc000318418}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-09 21:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.767: INFO: Pod "webserver-deployment-c7997dcc8-xg5hd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xg5hd webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-xg5hd 7a533092-4385-496a-a545-460e9f2d220a 14793213 0 2020-05-09 21:11:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000318867 0xc000318868}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:11:26.767: INFO: Pod "webserver-deployment-c7997dcc8-xmc7d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xmc7d webserver-deployment-c7997dcc8- deployment-1661 /api/v1/namespaces/deployment-1661/pods/webserver-deployment-c7997dcc8-xmc7d fd38ba93-f660-4e79-b3a7-7aba0ba4155c 14793102 0 2020-05-09 21:11:21 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0b071a02-4d4e-4ea8-88b6-93b776272f6c 0xc000319fd7 0xc000319fd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvvmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:11:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:11:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:11:26.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1661" for this suite. • [SLOW TEST:20.374 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":11,"skipped":215,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:11:26.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 9 21:11:29.301: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 9 21:11:31.311: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655488, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:11:33.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655488, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:11:35.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655488, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:11:37.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655488, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:11:39.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655488, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:11:41.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655489, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655488, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:11:44.495: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:11:44.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:11:46.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7629" for this suite. 
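(Editor's note: the conversion test above drives a webhook that rewrites custom resources from v1 to v2. Below is a minimal sketch of such a conversion handler, using the real apiextensions.k8s.io/v1 ConversionReview types; the convertObject helper, which only rewrites apiVersion, and the cert paths are illustrative placeholders, not the converter the e2e suite actually deploys.)

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// convertObject is a hypothetical per-object converter: it only rewrites
// apiVersion; a real webhook would also move/rename schema fields here.
func convertObject(raw runtime.RawExtension, desiredAPIVersion string) (runtime.RawExtension, error) {
	obj := map[string]interface{}{}
	if err := json.Unmarshal(raw.Raw, &obj); err != nil {
		return runtime.RawExtension{}, err
	}
	obj["apiVersion"] = desiredAPIVersion
	out, err := json.Marshal(obj)
	return runtime.RawExtension{Raw: out}, err
}

func serveConvert(w http.ResponseWriter, r *http.Request) {
	review := apiextensionsv1.ConversionReview{}
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
		return
	}
	resp := &apiextensionsv1.ConversionResponse{
		UID:    review.Request.UID,
		Result: metav1.Status{Status: metav1.StatusSuccess},
	}
	for _, raw := range review.Request.Objects {
		converted, err := convertObject(raw, review.Request.DesiredAPIVersion)
		if err != nil {
			resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
			break
		}
		resp.ConvertedObjects = append(resp.ConvertedObjects, converted)
	}
	review.Response = resp
	review.Request = nil // only the response is needed in the reply
	if err := json.NewEncoder(w).Encode(&review); err != nil {
		log.Printf("write response: %v", err)
	}
}

func main() {
	http.HandleFunc("/crdconvert", serveConvert)
	// The API server requires TLS for conversion webhooks; these cert
	// paths are placeholders for the serving cert set up in the test.
	log.Fatal(http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil))
}
```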
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:20.267 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":12,"skipped":220,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:11:47.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:12:03.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-983" for this suite. STEP: Destroying namespace "nsdeletetest-3314" for this suite. May 9 21:12:03.350: INFO: Namespace nsdeletetest-3314 was already deleted STEP: Destroying namespace "nsdeletetest-4706" for this suite. 
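(Editor's note: the Namespaces test above boils down to: create a namespace, create a pod in it, delete the namespace, and verify the pod disappears with it. A minimal client-go sketch of that flow, assuming a reachable cluster and a kubeconfig at the default path; the namespace prefix and image mirror the log, everything else is illustrative.)

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	// Create a throwaway namespace and a pod inside it.
	ns, err := client.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"},
	}, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	_, err = client.CoreV1().Pods(ns.Name).Create(ctx, &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine",
		}}},
	}, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Delete the namespace; its finalizer removes all contained pods.
	if err := client.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	for {
		_, err := client.CoreV1().Namespaces().Get(ctx, ns.Name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("namespace and its pods are gone:", ns.Name)
}
```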
• [SLOW TEST:16.089 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":13,"skipped":223,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:12:03.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:12:03.422: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 9 21:12:03.429: INFO: Pod name sample-pod: Found 0 pods out of 1 May 9 21:12:08.523: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 9 21:12:08.523: INFO: Creating deployment "test-rolling-update-deployment" May 9 21:12:08.564: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 9 21:12:08.616: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 9 21:12:10.623: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 9 21:12:10.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655528, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655528, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655528, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655528, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:12:12.780: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 9 21:12:12.787: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4266 /apis/apps/v1/namespaces/deployment-4266/deployments/test-rolling-update-deployment b7328825-a980-4209-9c25-b6ca95278a49 14793750 1 2020-05-09 21:12:08 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00406c598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-09 21:12:08 +0000 UTC,LastTransitionTime:2020-05-09 21:12:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-09 21:12:12 +0000 UTC,LastTransitionTime:2020-05-09 21:12:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 9 21:12:12.790: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-4266 /apis/apps/v1/namespaces/deployment-4266/replicasets/test-rolling-update-deployment-67cf4f6444 82a0c8b3-82e8-4f0c-94a5-359a141c2af6 14793739 1 2020-05-09 21:12:08 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b7328825-a980-4209-9c25-b6ca95278a49 0xc00406ca27 0xc00406ca28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00406ca98 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 9 21:12:12.790: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 9 21:12:12.790: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4266 /apis/apps/v1/namespaces/deployment-4266/replicasets/test-rolling-update-controller 94ecab2a-4a1b-441a-a239-8b6f7520c5f6 14793748 2 2020-05-09 21:12:03 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b7328825-a980-4209-9c25-b6ca95278a49 0xc00406c93f 0xc00406c950}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00406c9b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:12:12.792: INFO: Pod "test-rolling-update-deployment-67cf4f6444-7lr2v" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-7lr2v test-rolling-update-deployment-67cf4f6444- deployment-4266 /api/v1/namespaces/deployment-4266/pods/test-rolling-update-deployment-67cf4f6444-7lr2v be37fbcc-3886-49e9-b934-3f3d0430b6d6 14793738 0 2020-05-09 21:12:08 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 82a0c8b3-82e8-4f0c-94a5-359a141c2af6 0xc005750e57 0xc005750e58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b5b5n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b5b5n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b5b5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:12:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:12:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:12:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.239,StartTime:2020-05-09 21:12:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:12:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0ac0db24f52b0d6c87cd741f4d66a41158da173099a29ce9fc8e964caafe8cc6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:12:12.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4266" for this suite. • [SLOW TEST:9.443 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":14,"skipped":227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:12:12.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3960 STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 21:12:12.855: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 21:12:35.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=udp&host=10.244.1.240&port=8081&tries=1'] Namespace:pod-network-test-3960 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:12:35.030: INFO: >>> kubeConfig: /root/.kube/config I0509 21:12:35.061635 7 log.go:172] (0xc0008fe2c0) (0xc001671b80) Create stream I0509 21:12:35.061659 7 log.go:172] (0xc0008fe2c0) (0xc001671b80) Stream added, broadcasting: 1 I0509 21:12:35.063945 7 log.go:172] (0xc0008fe2c0) Reply frame received for 1 I0509 21:12:35.063993 7 log.go:172] (0xc0008fe2c0) (0xc0029fdae0) Create stream I0509 21:12:35.064008 7 log.go:172] (0xc0008fe2c0) (0xc0029fdae0) Stream added, broadcasting: 3 I0509 
21:12:35.064904 7 log.go:172] (0xc0008fe2c0) Reply frame received for 3 I0509 21:12:35.064940 7 log.go:172] (0xc0008fe2c0) (0xc001e41680) Create stream I0509 21:12:35.064956 7 log.go:172] (0xc0008fe2c0) (0xc001e41680) Stream added, broadcasting: 5 I0509 21:12:35.066088 7 log.go:172] (0xc0008fe2c0) Reply frame received for 5 I0509 21:12:35.145021 7 log.go:172] (0xc0008fe2c0) Data frame received for 3 I0509 21:12:35.145073 7 log.go:172] (0xc0029fdae0) (3) Data frame handling I0509 21:12:35.145102 7 log.go:172] (0xc0029fdae0) (3) Data frame sent I0509 21:12:35.146392 7 log.go:172] (0xc0008fe2c0) Data frame received for 3 I0509 21:12:35.146408 7 log.go:172] (0xc0029fdae0) (3) Data frame handling I0509 21:12:35.147553 7 log.go:172] (0xc0008fe2c0) Data frame received for 5 I0509 21:12:35.147579 7 log.go:172] (0xc001e41680) (5) Data frame handling I0509 21:12:35.149032 7 log.go:172] (0xc0008fe2c0) Data frame received for 1 I0509 21:12:35.149047 7 log.go:172] (0xc001671b80) (1) Data frame handling I0509 21:12:35.149054 7 log.go:172] (0xc001671b80) (1) Data frame sent I0509 21:12:35.149063 7 log.go:172] (0xc0008fe2c0) (0xc001671b80) Stream removed, broadcasting: 1 I0509 21:12:35.149075 7 log.go:172] (0xc0008fe2c0) Go away received I0509 21:12:35.149332 7 log.go:172] (0xc0008fe2c0) (0xc001671b80) Stream removed, broadcasting: 1 I0509 21:12:35.149350 7 log.go:172] (0xc0008fe2c0) (0xc0029fdae0) Stream removed, broadcasting: 3 I0509 21:12:35.149356 7 log.go:172] (0xc0008fe2c0) (0xc001e41680) Stream removed, broadcasting: 5 May 9 21:12:35.149: INFO: Waiting for responses: map[] May 9 21:12:35.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=udp&host=10.244.2.134&port=8081&tries=1'] Namespace:pod-network-test-3960 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:12:35.152: INFO: >>> kubeConfig: /root/.kube/config I0509 21:12:35.179957 7 log.go:172] (0xc00265f130) (0xc001e41ae0) Create stream I0509 21:12:35.179984 7 log.go:172] (0xc00265f130) (0xc001e41ae0) Stream added, broadcasting: 1 I0509 21:12:35.186887 7 log.go:172] (0xc00265f130) Reply frame received for 1 I0509 21:12:35.186994 7 log.go:172] (0xc00265f130) (0xc0029fdb80) Create stream I0509 21:12:35.187065 7 log.go:172] (0xc00265f130) (0xc0029fdb80) Stream added, broadcasting: 3 I0509 21:12:35.190970 7 log.go:172] (0xc00265f130) Reply frame received for 3 I0509 21:12:35.191044 7 log.go:172] (0xc00265f130) (0xc0028ff400) Create stream I0509 21:12:35.191079 7 log.go:172] (0xc00265f130) (0xc0028ff400) Stream added, broadcasting: 5 I0509 21:12:35.192450 7 log.go:172] (0xc00265f130) Reply frame received for 5 I0509 21:12:35.255931 7 log.go:172] (0xc00265f130) Data frame received for 3 I0509 21:12:35.255956 7 log.go:172] (0xc0029fdb80) (3) Data frame handling I0509 21:12:35.255970 7 log.go:172] (0xc0029fdb80) (3) Data frame sent I0509 21:12:35.256138 7 log.go:172] (0xc00265f130) Data frame received for 3 I0509 21:12:35.256152 7 log.go:172] (0xc0029fdb80) (3) Data frame handling I0509 21:12:35.256246 7 log.go:172] (0xc00265f130) Data frame received for 5 I0509 21:12:35.256260 7 log.go:172] (0xc0028ff400) (5) Data frame handling I0509 21:12:35.257626 7 log.go:172] (0xc00265f130) Data frame received for 1 I0509 21:12:35.257641 7 log.go:172] (0xc001e41ae0) (1) Data frame handling I0509 21:12:35.257653 7 log.go:172] (0xc001e41ae0) (1) Data frame sent I0509 21:12:35.257674 7 log.go:172] (0xc00265f130) 
(0xc001e41ae0) Stream removed, broadcasting: 1 I0509 21:12:35.257721 7 log.go:172] (0xc00265f130) Go away received I0509 21:12:35.257744 7 log.go:172] (0xc00265f130) (0xc001e41ae0) Stream removed, broadcasting: 1 I0509 21:12:35.257755 7 log.go:172] (0xc00265f130) (0xc0029fdb80) Stream removed, broadcasting: 3 I0509 21:12:35.257763 7 log.go:172] (0xc00265f130) (0xc0028ff400) Stream removed, broadcasting: 5 May 9 21:12:35.257: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:12:35.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3960" for this suite. • [SLOW TEST:22.466 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:12:35.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 9 21:12:39.855: INFO: Successfully updated pod "labelsupdate98652fe0-8188-46c8-92e0-6225d222b695" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:12:41.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9364" for this suite. 
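(Editor's note: the Downward API test above relies on the kubelet refreshing a downward API volume when a pod's labels change. A sketch of a pod exposing metadata.labels that way follows; the names, image, and polling command are illustrative, not the pod the suite creates.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "labelsupdate-demo", Labels: map[string]string{"key": "value1"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// The downward API volume serializes the pod's labels
					// into a file that the kubelet keeps up to date.
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
	// After creating this pod, relabeling it (e.g. kubectl label pod
	// labelsupdate-demo key=value2 --overwrite) should show up in
	// /etc/podinfo/labels within the kubelet's sync period.
}
```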
• [SLOW TEST:6.614 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":322,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:12:41.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-513 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-513 to expose endpoints map[] May 9 21:12:42.250: INFO: Get endpoints failed (10.494564ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 9 21:12:43.253: INFO: successfully validated that service multi-endpoint-test in namespace services-513 exposes endpoints map[] (1.014029605s elapsed) STEP: Creating pod pod1 in namespace services-513 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-513 to expose endpoints map[pod1:[100]] May 9 21:12:46.399: INFO: successfully validated that service multi-endpoint-test in namespace services-513 exposes endpoints map[pod1:[100]] (3.129360853s elapsed) STEP: Creating pod pod2 in namespace services-513 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-513 to expose endpoints map[pod1:[100] pod2:[101]] May 9 21:12:50.679: INFO: successfully validated that service multi-endpoint-test in namespace services-513 exposes endpoints map[pod1:[100] pod2:[101]] (4.276405795s elapsed) STEP: Deleting pod pod1 in namespace services-513 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-513 to expose endpoints map[pod2:[101]] May 9 21:12:51.752: INFO: successfully validated that service multi-endpoint-test in namespace services-513 exposes endpoints map[pod2:[101]] (1.054762151s elapsed) STEP: Deleting pod pod2 in namespace services-513 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-513 to expose endpoints map[] May 9 21:12:52.816: INFO: successfully validated that service multi-endpoint-test in namespace services-513 exposes endpoints map[] (1.058413086s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:12:52.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-513" for this suite. 
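(Editor's note: the Services test above publishes two named ports and checks the resulting endpoints map. A sketch of an equivalent multi-port Service object follows; the selector is an assumption, while the service name and target ports 100/101 mirror the log.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Multi-port services must name every port; the endpoints controller
	// then publishes one endpoint entry per named port, which is why the
	// log shows maps like map[pod1:[100] pod2:[101]].
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multiport-demo"},
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	out, err := json.MarshalIndent(svc, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```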
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.116 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":17,"skipped":327,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:12:52.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b13d3369-7926-40dc-affe-d5c2b297fca0 STEP: Creating secret with name s-test-opt-upd-b2f5db25-485f-4cdd-b3b3-c4207f06a55c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b13d3369-7926-40dc-affe-d5c2b297fca0 STEP: Updating secret s-test-opt-upd-b2f5db25-485f-4cdd-b3b3-c4207f06a55c STEP: Creating secret with name s-test-opt-create-1fec2312-d635-4bee-80fb-3a2643a52596 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:14:30.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1594" for this suite. 
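(Editor's note: the Secrets test above checks that deleting, updating, and creating optional secrets is eventually reflected inside a running pod's volume. A sketch of a pod mounting such an optional secret follows; all names and the watch loop are illustrative.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	// With Optional:true the volume mounts even while the secret is
	// absent; once the secret is created or updated, the kubelet pushes
	// the new content into the running pod on its periodic sync.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-optional-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do ls /etc/secret; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "sec", MountPath: "/etc/secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sec",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create",
						Optional:   &optional,
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```

This optionality is what lets the test delete one secret and create another while the pod keeps running: the volume never blocks pod startup, and the observed file contents simply converge to the current state of the secrets.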
• [SLOW TEST:97.749 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:14:30.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:14:31.558: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:14:33.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:14:35.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:14:38.760: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:14:38.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8285" for this suite. STEP: Destroying namespace "webhook-8285-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.235 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":19,"skipped":389,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:14:38.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:14:39.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f" in namespace "projected-7411" to be "success or failure" May 9 21:14:39.117: INFO: Pod "downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.494233ms May 9 21:14:41.121: INFO: Pod "downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051990899s May 9 21:14:43.127: INFO: Pod "downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058436533s STEP: Saw pod success May 9 21:14:43.127: INFO: Pod "downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f" satisfied condition "success or failure" May 9 21:14:43.131: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f container client-container: STEP: delete the pod May 9 21:14:43.155: INFO: Waiting for pod downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f to disappear May 9 21:14:43.165: INFO: Pod downwardapi-volume-803aef8e-b026-4cfb-bc8c-304be6da469f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:14:43.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7411" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:14:43.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 21:14:43.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4383' May 9 21:14:45.869: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 21:14:45.869: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 9 21:14:45.874: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 9 21:14:45.890: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 9 21:14:45.919: INFO: scanned /root for discovery docs: May 9 21:14:45.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4383' May 9 21:15:01.833: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 9 21:15:01.833: INFO: stdout: "Created e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939\nScaling up e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 9 21:15:01.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4383' May 9 21:15:01.931: INFO: stderr: "" May 9 21:15:01.931: INFO: stdout: "e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939-z26x7 " May 9 21:15:01.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939-z26x7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4383' May 9 21:15:02.027: INFO: stderr: "" May 9 21:15:02.027: INFO: stdout: "true" May 9 21:15:02.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939-z26x7 -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4383' May 9 21:15:02.152: INFO: stderr: "" May 9 21:15:02.152: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 9 21:15:02.153: INFO: e2e-test-httpd-rc-d2d391b38a38b90c0fea8b7e367a5939-z26x7 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 9 21:15:02.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4383' May 9 21:15:02.260: INFO: stderr: "" May 9 21:15:02.260: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:15:02.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4383" for this suite. • [SLOW TEST:19.122 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":21,"skipped":424,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:15:02.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:15:18.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-402" for this suite. • [SLOW TEST:16.164 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":22,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:15:18.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:15:19.044: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:15:21.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655719, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724655719, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:15:24.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:15:24.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4576-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:15:25.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1871" for this suite. STEP: Destroying namespace "webhook-1871-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.212 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":23,"skipped":478,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:15:25.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:15:29.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-98" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":490,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:15:29.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-r5vd STEP: Creating a pod to test atomic-volume-subpath May 9 21:15:30.041: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-r5vd" in namespace "subpath-9740" to be "success or failure" May 9 21:15:30.061: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.993959ms May 9 21:15:32.142: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100368245s May 9 21:15:34.146: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 4.105031144s May 9 21:15:36.151: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 6.109266799s May 9 21:15:38.155: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 8.113345599s May 9 21:15:40.158: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 10.116497931s May 9 21:15:42.162: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 12.120490656s May 9 21:15:44.166: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 14.124837293s May 9 21:15:46.176: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 16.134948362s May 9 21:15:48.183: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 18.141307455s May 9 21:15:50.187: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 20.145885452s May 9 21:15:52.191: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Running", Reason="", readiness=true. Elapsed: 22.149411508s May 9 21:15:54.195: INFO: Pod "pod-subpath-test-configmap-r5vd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.153974321s STEP: Saw pod success May 9 21:15:54.195: INFO: Pod "pod-subpath-test-configmap-r5vd" satisfied condition "success or failure" May 9 21:15:54.199: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-r5vd container test-container-subpath-configmap-r5vd: STEP: delete the pod May 9 21:15:54.237: INFO: Waiting for pod pod-subpath-test-configmap-r5vd to disappear May 9 21:15:54.257: INFO: Pod pod-subpath-test-configmap-r5vd no longer exists STEP: Deleting pod pod-subpath-test-configmap-r5vd May 9 21:15:54.257: INFO: Deleting pod "pod-subpath-test-configmap-r5vd" in namespace "subpath-9740" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:15:54.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9740" for this suite. 
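The 24-second runtime above is simply the pod running to completion: the test container keeps re-reading the file to prove the subPath projection stays consistent while the kubelet's atomic writer does its symlink swaps underneath. The mount shape under test is one configmap key projected to a single path via subPath; a minimal hand-written equivalent (all names and paths here are illustrative):

kubectl create configmap subpath-demo-cm --from-literal=configmap-key=hello -n subpath-9740
cat <<'EOF' | kubectl apply -n subpath-9740 -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/demo/configmap-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/configmap-key
      subPath: configmap-key
EOF

One caveat worth remembering: unlike whole-volume configmap mounts, subPath mounts do not receive live updates after the container starts.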
• [SLOW TEST:24.352 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":25,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:15:54.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 9 21:15:54.381: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:16:01.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4827" for this suite. 
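What the init-container test above pins down: with restartPolicy: Never a failed init container is not retried, the app containers never start, and the pod goes straight to phase Failed, which is why the run finishes in about seven seconds. A minimal reproduction (pod name, image and commands are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init
    image: busybox
    command: ["false"]        # exits non-zero; with Never it is not retried
  containers:
  - name: app
    image: busybox
    command: ["echo", "never runs"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expect: Failed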
• [SLOW TEST:7.035 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":26,"skipped":520,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:16:01.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-8374d93e-da82-44e8-bf68-3009cbfd7d66 STEP: Creating secret with name s-test-opt-upd-a9b6e214-284f-4fa1-abf3-7c18cddb5af9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8374d93e-da82-44e8-bf68-3009cbfd7d66 STEP: Updating secret s-test-opt-upd-a9b6e214-284f-4fa1-abf3-7c18cddb5af9 STEP: Creating secret with name s-test-opt-create-35c05744-2155-4e80-aad8-39ce5a5cbc95 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:17:25.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8405" for this suite. 
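This is the projected-volume twin of the plain-secret test earlier in the run; the observable behaviour (optional secrets, with delete/update/create all reflected in the mounted files after kubelet re-sync) is the same, and only the volume stanza differs. The relevant fragment of the pod spec looks roughly like this (the volume name is illustrative, the secret names mirror the log):

volumes:
- name: secret-volumes
  projected:
    sources:
    - secret:
        name: s-test-opt-del-8374d93e-da82-44e8-bf68-3009cbfd7d66
        optional: true
    - secret:
        name: s-test-opt-upd-a9b6e214-284f-4fa1-abf3-7c18cddb5af9
        optional: true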
• [SLOW TEST:84.648 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":533,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:17:25.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 9 21:17:26.082: INFO: Waiting up to 5m0s for pod "pod-a150b06a-bf9e-4698-a333-588b6770dc46" in namespace "emptydir-4790" to be "success or failure" May 9 21:17:26.085: INFO: Pod "pod-a150b06a-bf9e-4698-a333-588b6770dc46": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003818ms May 9 21:17:28.091: INFO: Pod "pod-a150b06a-bf9e-4698-a333-588b6770dc46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00892115s May 9 21:17:30.096: INFO: Pod "pod-a150b06a-bf9e-4698-a333-588b6770dc46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013448739s STEP: Saw pod success May 9 21:17:30.096: INFO: Pod "pod-a150b06a-bf9e-4698-a333-588b6770dc46" satisfied condition "success or failure" May 9 21:17:30.099: INFO: Trying to get logs from node jerma-worker2 pod pod-a150b06a-bf9e-4698-a333-588b6770dc46 container test-container: STEP: delete the pod May 9 21:17:30.132: INFO: Waiting for pod pod-a150b06a-bf9e-4698-a333-588b6770dc46 to disappear May 9 21:17:30.137: INFO: Pod pod-a150b06a-bf9e-4698-a333-588b6770dc46 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:17:30.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4790" for this suite. 
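The (root,0666,default) case above reduces to: a file created with mode 0666 on an emptyDir volume backed by the default medium (node disk, as opposed to medium: Memory) must stat as 0666 from inside the container. By hand (pod name and paths are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "umask 0; touch /mnt/f; stat -c '%a' /mnt/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt
EOF
kubectl logs emptydir-mode-demo   # expect: 666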
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":536,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:17:30.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 9 21:17:30.212: INFO: >>> kubeConfig: /root/.kube/config May 9 21:17:32.194: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:17:42.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7012" for this suite. • [SLOW TEST:12.545 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":29,"skipped":536,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:17:42.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-3425bf86-fc00-4ad4-93c6-07955e9215bd STEP: Creating a pod to test consume secrets May 9 21:17:42.786: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c" in namespace "projected-5385" to be "success or failure" May 9 21:17:42.806: INFO: Pod "pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.806567ms May 9 21:17:44.838: INFO: Pod "pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052051792s May 9 21:17:46.841: INFO: Pod "pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c": Phase="Running", Reason="", readiness=true. Elapsed: 4.055436452s May 9 21:17:48.846: INFO: Pod "pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059869464s STEP: Saw pod success May 9 21:17:48.846: INFO: Pod "pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c" satisfied condition "success or failure" May 9 21:17:48.849: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c container projected-secret-volume-test: STEP: delete the pod May 9 21:17:48.881: INFO: Waiting for pod pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c to disappear May 9 21:17:48.892: INFO: Pod pod-projected-secrets-d1b8988f-ba57-4ad3-9476-52cf65ff807c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:17:48.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5385" for this suite. • [SLOW TEST:6.181 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":536,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:17:48.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2699 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2699 I0509 21:17:49.036479 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2699, replica count: 2 I0509 21:17:52.086938 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:17:55.087180 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 9 21:17:55.087: INFO: Creating new exec pod May 9 21:18:00.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2699 execpodnm22z -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 9 21:18:00.354: INFO: stderr: "I0509 21:18:00.246157 179 log.go:172] (0xc00097e580) (0xc000b40000) Create stream\nI0509 21:18:00.246222 179 log.go:172] (0xc00097e580) (0xc000b40000) Stream added, broadcasting: 1\nI0509 21:18:00.249694 179 log.go:172] (0xc00097e580) Reply frame received for 1\nI0509 21:18:00.249763 179 log.go:172] (0xc00097e580) (0xc00072b900) Create stream\nI0509 21:18:00.249787 179 log.go:172] (0xc00097e580) (0xc00072b900) Stream added, broadcasting: 3\nI0509 21:18:00.251034 179 log.go:172] (0xc00097e580) Reply frame received for 3\nI0509 21:18:00.251081 179 log.go:172] (0xc00097e580) (0xc000b400a0) Create stream\nI0509 21:18:00.251096 179 log.go:172] (0xc00097e580) (0xc000b400a0) Stream added, broadcasting: 5\nI0509 21:18:00.252319 179 log.go:172] (0xc00097e580) Reply frame received for 5\nI0509 21:18:00.347603 179 log.go:172] (0xc00097e580) Data frame received for 5\nI0509 21:18:00.347641 179 log.go:172] (0xc000b400a0) (5) Data frame handling\nI0509 21:18:00.347682 179 log.go:172] (0xc000b400a0) (5) Data frame sent\nI0509 21:18:00.347702 179 log.go:172] (0xc00097e580) Data frame received for 5\nI0509 21:18:00.347715 179 log.go:172] (0xc000b400a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0509 21:18:00.347771 179 log.go:172] (0xc00097e580) Data frame received for 3\nI0509 21:18:00.347795 179 log.go:172] (0xc00072b900) (3) Data frame handling\nI0509 21:18:00.350187 179 log.go:172] (0xc00097e580) Data frame received for 1\nI0509 21:18:00.350218 179 log.go:172] (0xc000b40000) (1) Data frame handling\nI0509 21:18:00.350239 179 log.go:172] (0xc000b40000) (1) Data frame sent\nI0509 21:18:00.350255 179 log.go:172] (0xc00097e580) (0xc000b40000) Stream removed, broadcasting: 1\nI0509 21:18:00.350268 179 log.go:172] (0xc00097e580) Go away received\nI0509 21:18:00.350687 179 log.go:172] (0xc00097e580) (0xc000b40000) Stream removed, broadcasting: 1\nI0509 21:18:00.350707 179 log.go:172] (0xc00097e580) (0xc00072b900) Stream removed, broadcasting: 3\nI0509 21:18:00.350716 179 log.go:172] (0xc00097e580) (0xc000b400a0) Stream removed, broadcasting: 5\n" May 9 21:18:00.355: INFO: stdout: "" May 9 21:18:00.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2699 execpodnm22z -- /bin/sh -x -c nc -zv -t -w 2 10.106.206.52 80' May 9 21:18:00.555: INFO: stderr: "I0509 21:18:00.490823 199 log.go:172] (0xc000a404d0) (0xc0008de000) Create stream\nI0509 21:18:00.490898 199 log.go:172] (0xc000a404d0) (0xc0008de000) Stream added, broadcasting: 1\nI0509 21:18:00.495315 199 log.go:172] (0xc000a404d0) Reply frame received for 1\nI0509 21:18:00.495345 199 log.go:172] (0xc000a404d0) (0xc0005a8640) Create stream\nI0509 21:18:00.495357 199 log.go:172] (0xc000a404d0) (0xc0005a8640) Stream added, broadcasting: 3\nI0509 21:18:00.496080 199 log.go:172] (0xc000a404d0) Reply frame received for 3\nI0509 21:18:00.496105 199 log.go:172] (0xc000a404d0) (0xc00031f400) Create stream\nI0509 21:18:00.496116 199 log.go:172] (0xc000a404d0) (0xc00031f400) Stream added, broadcasting: 5\nI0509 21:18:00.496954 199 log.go:172] (0xc000a404d0) Reply frame received for 
5\nI0509 21:18:00.549583 199 log.go:172] (0xc000a404d0) Data frame received for 5\nI0509 21:18:00.549621 199 log.go:172] (0xc00031f400) (5) Data frame handling\nI0509 21:18:00.549633 199 log.go:172] (0xc00031f400) (5) Data frame sent\nI0509 21:18:00.549641 199 log.go:172] (0xc000a404d0) Data frame received for 5\nI0509 21:18:00.549646 199 log.go:172] (0xc00031f400) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.206.52 80\nConnection to 10.106.206.52 80 port [tcp/http] succeeded!\nI0509 21:18:00.549668 199 log.go:172] (0xc000a404d0) Data frame received for 3\nI0509 21:18:00.549676 199 log.go:172] (0xc0005a8640) (3) Data frame handling\nI0509 21:18:00.551490 199 log.go:172] (0xc000a404d0) Data frame received for 1\nI0509 21:18:00.551506 199 log.go:172] (0xc0008de000) (1) Data frame handling\nI0509 21:18:00.551518 199 log.go:172] (0xc0008de000) (1) Data frame sent\nI0509 21:18:00.551531 199 log.go:172] (0xc000a404d0) (0xc0008de000) Stream removed, broadcasting: 1\nI0509 21:18:00.551584 199 log.go:172] (0xc000a404d0) Go away received\nI0509 21:18:00.551799 199 log.go:172] (0xc000a404d0) (0xc0008de000) Stream removed, broadcasting: 1\nI0509 21:18:00.551817 199 log.go:172] (0xc000a404d0) (0xc0005a8640) Stream removed, broadcasting: 3\nI0509 21:18:00.551825 199 log.go:172] (0xc000a404d0) (0xc00031f400) Stream removed, broadcasting: 5\n" May 9 21:18:00.555: INFO: stdout: "" May 9 21:18:00.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2699 execpodnm22z -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32397' May 9 21:18:00.781: INFO: stderr: "I0509 21:18:00.695868 220 log.go:172] (0xc0009e0630) (0xc0005232c0) Create stream\nI0509 21:18:00.695921 220 log.go:172] (0xc0009e0630) (0xc0005232c0) Stream added, broadcasting: 1\nI0509 21:18:00.698879 220 log.go:172] (0xc0009e0630) Reply frame received for 1\nI0509 21:18:00.698921 220 log.go:172] (0xc0009e0630) (0xc000a52000) Create stream\nI0509 21:18:00.698933 220 log.go:172] (0xc0009e0630) (0xc000a52000) Stream added, broadcasting: 3\nI0509 21:18:00.699904 220 log.go:172] (0xc0009e0630) Reply frame received for 3\nI0509 21:18:00.699947 220 log.go:172] (0xc0009e0630) (0xc0007059a0) Create stream\nI0509 21:18:00.699964 220 log.go:172] (0xc0009e0630) (0xc0007059a0) Stream added, broadcasting: 5\nI0509 21:18:00.700933 220 log.go:172] (0xc0009e0630) Reply frame received for 5\nI0509 21:18:00.774772 220 log.go:172] (0xc0009e0630) Data frame received for 3\nI0509 21:18:00.774812 220 log.go:172] (0xc000a52000) (3) Data frame handling\nI0509 21:18:00.774838 220 log.go:172] (0xc0009e0630) Data frame received for 5\nI0509 21:18:00.774877 220 log.go:172] (0xc0007059a0) (5) Data frame handling\nI0509 21:18:00.774904 220 log.go:172] (0xc0007059a0) (5) Data frame sent\nI0509 21:18:00.774930 220 log.go:172] (0xc0009e0630) Data frame received for 5\nI0509 21:18:00.774957 220 log.go:172] (0xc0007059a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32397\nConnection to 172.17.0.10 32397 port [tcp/32397] succeeded!\nI0509 21:18:00.776548 220 log.go:172] (0xc0009e0630) Data frame received for 1\nI0509 21:18:00.776569 220 log.go:172] (0xc0005232c0) (1) Data frame handling\nI0509 21:18:00.776584 220 log.go:172] (0xc0005232c0) (1) Data frame sent\nI0509 21:18:00.776717 220 log.go:172] (0xc0009e0630) (0xc0005232c0) Stream removed, broadcasting: 1\nI0509 21:18:00.776746 220 log.go:172] (0xc0009e0630) Go away received\nI0509 21:18:00.777404 220 log.go:172] (0xc0009e0630) (0xc0005232c0) Stream removed, broadcasting: 
1\nI0509 21:18:00.777429 220 log.go:172] (0xc0009e0630) (0xc000a52000) Stream removed, broadcasting: 3\nI0509 21:18:00.777441 220 log.go:172] (0xc0009e0630) (0xc0007059a0) Stream removed, broadcasting: 5\n" May 9 21:18:00.782: INFO: stdout: "" May 9 21:18:00.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2699 execpodnm22z -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32397' May 9 21:18:01.042: INFO: stderr: "I0509 21:18:00.960557 241 log.go:172] (0xc0000f56b0) (0xc0009fc640) Create stream\nI0509 21:18:00.960615 241 log.go:172] (0xc0000f56b0) (0xc0009fc640) Stream added, broadcasting: 1\nI0509 21:18:00.966608 241 log.go:172] (0xc0000f56b0) Reply frame received for 1\nI0509 21:18:00.966657 241 log.go:172] (0xc0000f56b0) (0xc0006abb80) Create stream\nI0509 21:18:00.966669 241 log.go:172] (0xc0000f56b0) (0xc0006abb80) Stream added, broadcasting: 3\nI0509 21:18:00.967639 241 log.go:172] (0xc0000f56b0) Reply frame received for 3\nI0509 21:18:00.967671 241 log.go:172] (0xc0000f56b0) (0xc000602780) Create stream\nI0509 21:18:00.967681 241 log.go:172] (0xc0000f56b0) (0xc000602780) Stream added, broadcasting: 5\nI0509 21:18:00.968648 241 log.go:172] (0xc0000f56b0) Reply frame received for 5\nI0509 21:18:01.035009 241 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0509 21:18:01.035044 241 log.go:172] (0xc000602780) (5) Data frame handling\nI0509 21:18:01.035072 241 log.go:172] (0xc000602780) (5) Data frame sent\nI0509 21:18:01.035086 241 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0509 21:18:01.035097 241 log.go:172] (0xc000602780) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32397\nConnection to 172.17.0.8 32397 port [tcp/32397] succeeded!\nI0509 21:18:01.035130 241 log.go:172] (0xc000602780) (5) Data frame sent\nI0509 21:18:01.035153 241 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0509 21:18:01.035166 241 log.go:172] (0xc000602780) (5) Data frame handling\nI0509 21:18:01.035794 241 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0509 21:18:01.035823 241 log.go:172] (0xc0006abb80) (3) Data frame handling\nI0509 21:18:01.037327 241 log.go:172] (0xc0000f56b0) Data frame received for 1\nI0509 21:18:01.037348 241 log.go:172] (0xc0009fc640) (1) Data frame handling\nI0509 21:18:01.037365 241 log.go:172] (0xc0009fc640) (1) Data frame sent\nI0509 21:18:01.037378 241 log.go:172] (0xc0000f56b0) (0xc0009fc640) Stream removed, broadcasting: 1\nI0509 21:18:01.037674 241 log.go:172] (0xc0000f56b0) Go away received\nI0509 21:18:01.037770 241 log.go:172] (0xc0000f56b0) (0xc0009fc640) Stream removed, broadcasting: 1\nI0509 21:18:01.037793 241 log.go:172] (0xc0000f56b0) (0xc0006abb80) Stream removed, broadcasting: 3\nI0509 21:18:01.037806 241 log.go:172] (0xc0000f56b0) (0xc000602780) Stream removed, broadcasting: 5\n" May 9 21:18:01.042: INFO: stdout: "" May 9 21:18:01.042: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:18:01.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2699" for this suite. 
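The type change is the non-trivial part of the test above: an ExternalName service is just a DNS CNAME with no selector, ports, or cluster IP, so converting it to NodePort means dropping .spec.externalName and supplying ports plus a selector, after which kube-proxy answers on the cluster IP and on a node port on every node. The reachability probes then mirror the log exactly (the exec pod name and the addresses are only meaningful inside that cluster):

# service DNS name, ClusterIP:port, and nodeIP:nodePort must all accept TCP
kubectl exec -n services-2699 execpodnm22z -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
kubectl exec -n services-2699 execpodnm22z -- /bin/sh -x -c 'nc -zv -t -w 2 10.106.206.52 80'
kubectl exec -n services-2699 execpodnm22z -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.10 32397'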
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.178 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":31,"skipped":543,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:18:01.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 9 21:18:01.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1697' May 9 21:18:01.449: INFO: stderr: "" May 9 21:18:01.449: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 21:18:01.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:01.582: INFO: stderr: "" May 9 21:18:01.582: INFO: stdout: "update-demo-nautilus-xnt9k update-demo-nautilus-z9kpx " May 9 21:18:01.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnt9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:01.673: INFO: stderr: "" May 9 21:18:01.673: INFO: stdout: "" May 9 21:18:01.673: INFO: update-demo-nautilus-xnt9k is created but not running May 9 21:18:06.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:06.768: INFO: stderr: "" May 9 21:18:06.768: INFO: stdout: "update-demo-nautilus-xnt9k update-demo-nautilus-z9kpx " May 9 21:18:06.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnt9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:06.910: INFO: stderr: "" May 9 21:18:06.910: INFO: stdout: "true" May 9 21:18:06.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnt9k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:06.999: INFO: stderr: "" May 9 21:18:06.999: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:18:06.999: INFO: validating pod update-demo-nautilus-xnt9k May 9 21:18:07.024: INFO: got data: { "image": "nautilus.jpg" } May 9 21:18:07.024: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 21:18:07.024: INFO: update-demo-nautilus-xnt9k is verified up and running May 9 21:18:07.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9kpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:07.118: INFO: stderr: "" May 9 21:18:07.118: INFO: stdout: "true" May 9 21:18:07.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9kpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:07.212: INFO: stderr: "" May 9 21:18:07.212: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:18:07.213: INFO: validating pod update-demo-nautilus-z9kpx May 9 21:18:07.218: INFO: got data: { "image": "nautilus.jpg" } May 9 21:18:07.218: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 21:18:07.218: INFO: update-demo-nautilus-z9kpx is verified up and running STEP: scaling down the replication controller May 9 21:18:07.220: INFO: scanned /root for discovery docs: May 9 21:18:07.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1697' May 9 21:18:08.401: INFO: stderr: "" May 9 21:18:08.401: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 9 21:18:08.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:08.555: INFO: stderr: "" May 9 21:18:08.555: INFO: stdout: "update-demo-nautilus-xnt9k update-demo-nautilus-z9kpx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 9 21:18:13.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:13.644: INFO: stderr: "" May 9 21:18:13.644: INFO: stdout: "update-demo-nautilus-xnt9k update-demo-nautilus-z9kpx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 9 21:18:18.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:18.737: INFO: stderr: "" May 9 21:18:18.737: INFO: stdout: "update-demo-nautilus-xnt9k update-demo-nautilus-z9kpx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 9 21:18:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:23.848: INFO: stderr: "" May 9 21:18:23.848: INFO: stdout: "update-demo-nautilus-z9kpx " May 9 21:18:23.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9kpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:23.943: INFO: stderr: "" May 9 21:18:23.943: INFO: stdout: "true" May 9 21:18:23.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9kpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:24.031: INFO: stderr: "" May 9 21:18:24.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:18:24.031: INFO: validating pod update-demo-nautilus-z9kpx May 9 21:18:24.048: INFO: got data: { "image": "nautilus.jpg" } May 9 21:18:24.048: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 21:18:24.048: INFO: update-demo-nautilus-z9kpx is verified up and running STEP: scaling up the replication controller May 9 21:18:24.049: INFO: scanned /root for discovery docs: May 9 21:18:24.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1697' May 9 21:18:25.237: INFO: stderr: "" May 9 21:18:25.237: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 9 21:18:25.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:25.333: INFO: stderr: "" May 9 21:18:25.333: INFO: stdout: "update-demo-nautilus-mnn25 update-demo-nautilus-z9kpx " May 9 21:18:25.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnn25 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:25.415: INFO: stderr: "" May 9 21:18:25.415: INFO: stdout: "" May 9 21:18:25.415: INFO: update-demo-nautilus-mnn25 is created but not running May 9 21:18:30.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1697' May 9 21:18:30.519: INFO: stderr: "" May 9 21:18:30.519: INFO: stdout: "update-demo-nautilus-mnn25 update-demo-nautilus-z9kpx " May 9 21:18:30.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnn25 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:30.606: INFO: stderr: "" May 9 21:18:30.606: INFO: stdout: "true" May 9 21:18:30.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnn25 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:30.702: INFO: stderr: "" May 9 21:18:30.702: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:18:30.702: INFO: validating pod update-demo-nautilus-mnn25 May 9 21:18:30.706: INFO: got data: { "image": "nautilus.jpg" } May 9 21:18:30.706: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 21:18:30.706: INFO: update-demo-nautilus-mnn25 is verified up and running May 9 21:18:30.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9kpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:30.805: INFO: stderr: "" May 9 21:18:30.805: INFO: stdout: "true" May 9 21:18:30.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z9kpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1697' May 9 21:18:30.887: INFO: stderr: "" May 9 21:18:30.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:18:30.887: INFO: validating pod update-demo-nautilus-z9kpx May 9 21:18:30.890: INFO: got data: { "image": "nautilus.jpg" } May 9 21:18:30.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 9 21:18:30.890: INFO: update-demo-nautilus-z9kpx is verified up and running STEP: using delete to clean up resources May 9 21:18:30.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1697' May 9 21:18:30.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:18:30.995: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 9 21:18:30.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1697' May 9 21:18:31.109: INFO: stderr: "No resources found in kubectl-1697 namespace.\n" May 9 21:18:31.110: INFO: stdout: "" May 9 21:18:31.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1697 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 21:18:31.212: INFO: stderr: "" May 9 21:18:31.212: INFO: stdout: "update-demo-nautilus-mnn25\nupdate-demo-nautilus-z9kpx\n" May 9 21:18:31.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1697' May 9 21:18:31.820: INFO: stderr: "No resources found in kubectl-1697 namespace.\n" May 9 21:18:31.820: INFO: stdout: "" May 9 21:18:31.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1697 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 21:18:31.919: INFO: stderr: "" May 9 21:18:31.919: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:18:31.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1697" for this suite. 
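Cleanup in this block force-deletes the RC by piping its original manifest back into kubectl, then polls until neither the rc/svc nor any un-terminated pod remains under the label. Equivalent commands (the manifest filename is illustrative; the suite supplies it on stdin via -f -):

    # Force-delete; the "Immediate deletion does not wait..." warning is expected.
    kubectl delete --grace-period=0 --force -f - --namespace=kubectl-1697 < update-demo-rc.yaml

    # Verify nothing under the label survives.
    kubectl get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1697
    kubectl get pods -l name=update-demo --namespace=kubectl-1697 \
      -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'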
• [SLOW TEST:30.848 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":32,"skipped":544,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:18:31.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 9 21:18:36.777: INFO: Successfully updated pod "annotationupdatecf614035-9fce-4748-b2ed-69946549ef4e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:18:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2466" for this suite. • [SLOW TEST:6.894 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":554,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:18:38.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 9 21:18:49.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:18:49.014: INFO: Pod pod-with-poststart-exec-hook still exists May 9 21:18:51.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:18:51.018: INFO: Pod pod-with-poststart-exec-hook still exists May 9 21:18:53.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:18:53.018: INFO: Pod pod-with-poststart-exec-hook still exists May 9 21:18:55.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:18:55.018: INFO: Pod pod-with-poststart-exec-hook still exists May 9 21:18:57.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:18:57.018: INFO: Pod pod-with-poststart-exec-hook still exists May 9 21:18:59.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:18:59.018: INFO: Pod pod-with-poststart-exec-hook still exists May 9 21:19:01.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 21:19:01.018: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:19:01.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3160" for this suite. • [SLOW TEST:22.207 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":566,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:19:01.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-0f993229-2f3a-4f72-94bc-65bde695fabf STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-0f993229-2f3a-4f72-94bc-65bde695fabf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:19:07.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9487" for this suite. • [SLOW TEST:6.184 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":578,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:19:07.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 9 21:19:07.268: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795884 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 21:19:07.268: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795884 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 9 21:19:17.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795925 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 9 21:19:17.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795925 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 9 21:19:27.286: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795959 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 21:19:27.286: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795959 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 9 21:19:37.292: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795993 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 21:19:37.292: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-a a46a0480-6ef2-4f06-8c0a-d707723c9025 14795993 0 2020-05-09 21:19:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 9 21:19:47.299: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-b 9551330a-0cab-4175-84a8-0fdedef44da8 14796023 0 2020-05-09 21:19:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 21:19:47.299: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-b 9551330a-0cab-4175-84a8-0fdedef44da8 14796023 0 2020-05-09 21:19:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 9 21:19:57.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-b 9551330a-0cab-4175-84a8-0fdedef44da8 14796053 0 2020-05-09 21:19:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 21:19:57.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8603 /api/v1/namespaces/watch-8603/configmaps/e2e-watch-test-configmap-b 9551330a-0cab-4175-84a8-0fdedef44da8 14796053 0 2020-05-09 21:19:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:20:07.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8603" for this suite. 
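The three watchers above (label A, label B, A-or-B) each see exactly the events for their selector. That behaviour can be approximated from the command line with a label-selector watch; names below are taken from the log, and kubectl streams the same ADDED/MODIFIED/DELETED sequence:

    # In one terminal: watch configmaps carrying label A.
    kubectl get configmaps -l watch-this-configmap=multiple-watchers-A \
      --namespace=watch-8603 --watch

    # In another: create, label, mutate, and delete to drive the events.
    kubectl create configmap e2e-watch-test-configmap-a --namespace=watch-8603
    kubectl label configmap e2e-watch-test-configmap-a \
      watch-this-configmap=multiple-watchers-A --namespace=watch-8603
    kubectl patch configmap e2e-watch-test-configmap-a --namespace=watch-8603 \
      --type=merge -p '{"data":{"mutation":"1"}}'
    kubectl delete configmap e2e-watch-test-configmap-a --namespace=watch-8603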
• [SLOW TEST:60.104 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":36,"skipped":581,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:20:07.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:20:07.438: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-09904f4f-caa5-4242-816e-07fc15926d59" in namespace "security-context-test-8515" to be "success or failure" May 9 21:20:07.440: INFO: Pod "busybox-privileged-false-09904f4f-caa5-4242-816e-07fc15926d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392974ms May 9 21:20:09.468: INFO: Pod "busybox-privileged-false-09904f4f-caa5-4242-816e-07fc15926d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030438872s May 9 21:20:11.473: INFO: Pod "busybox-privileged-false-09904f4f-caa5-4242-816e-07fc15926d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035402077s May 9 21:20:11.473: INFO: Pod "busybox-privileged-false-09904f4f-caa5-4242-816e-07fc15926d59" satisfied condition "success or failure" May 9 21:20:11.479: INFO: Got logs for pod "busybox-privileged-false-09904f4f-caa5-4242-816e-07fc15926d59": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:20:11.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8515" for this suite. 
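The assertion here is the captured container log: with privileged: false and no added capabilities, busybox's ip cannot perform a netlink operation, so the kernel refuses it. A minimal reproduction sketch; the pod name and the specific ip subcommand are assumptions, since the log only shows the resulting error:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-privileged-false-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        # "|| true" keeps the pod phase Succeeded while the error still lands in the log.
        command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
        securityContext:
          privileged: false
    EOF
    # Expect: ip: RTNETLINK answers: Operation not permitted
    kubectl logs busybox-privileged-false-demo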
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":590,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:20:11.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-811745b5-6de7-4259-bbec-b07f713f02a2 in namespace container-probe-8858 May 9 21:20:15.592: INFO: Started pod busybox-811745b5-6de7-4259-bbec-b07f713f02a2 in namespace container-probe-8858 STEP: checking the pod's current state and verifying that restartCount is present May 9 21:20:15.595: INFO: Initial restart count of pod busybox-811745b5-6de7-4259-bbec-b07f713f02a2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:16.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8858" for this suite. 
• [SLOW TEST:244.823 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":602,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:16.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:24:16.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534" in namespace "downward-api-163" to be "success or failure" May 9 21:24:16.436: INFO: Pod "downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534": Phase="Pending", Reason="", readiness=false. Elapsed: 54.47259ms May 9 21:24:18.440: INFO: Pod "downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058787862s May 9 21:24:20.444: INFO: Pod "downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062498953s STEP: Saw pod success May 9 21:24:20.444: INFO: Pod "downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534" satisfied condition "success or failure" May 9 21:24:20.446: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534 container client-container: STEP: delete the pod May 9 21:24:20.581: INFO: Waiting for pod downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534 to disappear May 9 21:24:20.588: INFO: Pod downwardapi-volume-a7d552a1-6ba8-42f5-ba2b-daaaa3e90534 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:20.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-163" for this suite. 
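The downward API test sets an explicit mode on a single projected item and reads the file's permissions back from the container (the log names that container client-container). A sketch of the manifest shape being exercised; the mount path and mode value are assumptions, since the manifest itself is not printed:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # Print the permissions of the projected file.
        command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            mode: 0400
            fieldRef:
              fieldPath: metadata.name
    EOF
    # Expect something like: -r--------  1 root  root  ...  podname
    kubectl logs downwardapi-volume-mode-demo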
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":609,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:20.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 9 21:24:20.711: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5030 /api/v1/namespaces/watch-5030/configmaps/e2e-watch-test-resource-version a033d93b-3ebd-4a84-93f5-63e8834f464e 14796880 0 2020-05-09 21:24:20 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 21:24:20.711: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5030 /api/v1/namespaces/watch-5030/configmaps/e2e-watch-test-resource-version a033d93b-3ebd-4a84-93f5-63e8834f464e 14796881 0 2020-05-09 21:24:20 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:20.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5030" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":40,"skipped":610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:20.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:31.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5967" for this suite. • [SLOW TEST:11.195 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":41,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:31.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:24:32.010: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 9 21:24:37.015: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 9 21:24:37.015: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 9 21:24:37.083: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6979 /apis/apps/v1/namespaces/deployment-6979/deployments/test-cleanup-deployment 456f9c6f-04a8-470c-bc5a-0f1e911c4a29 14796982 1 2020-05-09 21:24:37 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f9d7f8 ClusterFirst map[] false 
false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 9 21:24:37.111: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6979 /apis/apps/v1/namespaces/deployment-6979/replicasets/test-cleanup-deployment-55ffc6b7b6 35098bd5-73f9-4eb3-9cdb-d223d88d1238 14796989 1 2020-05-09 21:24:37 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 456f9c6f-04a8-470c-bc5a-0f1e911c4a29 0xc003f9dc37 0xc003f9dc38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f9dca8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:24:37.111: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 9 21:24:37.111: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6979 /apis/apps/v1/namespaces/deployment-6979/replicasets/test-cleanup-controller dc29f349-09c3-4dbf-8036-0b9a636e2c04 14796984 1 2020-05-09 21:24:31 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 456f9c6f-04a8-470c-bc5a-0f1e911c4a29 0xc003f9db4f 0xc003f9db60}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003f9dbc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 9 21:24:37.135: INFO: Pod "test-cleanup-controller-zpnst" is available: &Pod{ObjectMeta:{test-cleanup-controller-zpnst test-cleanup-controller- deployment-6979 /api/v1/namespaces/deployment-6979/pods/test-cleanup-controller-zpnst 9bf88a16-3d4b-4366-8ef8-38ab3416b6ce 14796976 0 2020-05-09 21:24:32 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller dc29f349-09c3-4dbf-8036-0b9a636e2c04 0xc003f46177 0xc003f46178}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-58mfm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-58mfm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-58mfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:24:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.254,StartTime:2020-05-09 21:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:24:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1ea113d0c3635f03ec66ebc2e1b9b148f96a95068fa09d017f7d7056db439600,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.254,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 9 21:24:37.135: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-h7t6s" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-h7t6s test-cleanup-deployment-55ffc6b7b6- deployment-6979 /api/v1/namespaces/deployment-6979/pods/test-cleanup-deployment-55ffc6b7b6-h7t6s 3bfe220d-2878-4509-85a9-4c702d892ac9 14796991 0 2020-05-09 21:24:37 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 35098bd5-73f9-4eb3-9cdb-d223d88d1238 0xc003f46307 0xc003f46308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-58mfm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-58mfm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-58mfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:ni
l,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:24:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:37.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6979" for this suite. • [SLOW TEST:5.233 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":42,"skipped":660,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:37.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 9 21:24:37.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5105' May 9 21:24:37.495: INFO: stderr: "" May 9 21:24:37.495: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 9 21:24:37.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5105' May 9 21:24:37.590: INFO: stderr: "" May 9 21:24:37.591: INFO: stdout: "update-demo-nautilus-2qxcx update-demo-nautilus-nzs6v " May 9 21:24:37.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qxcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5105' May 9 21:24:37.671: INFO: stderr: "" May 9 21:24:37.671: INFO: stdout: "" May 9 21:24:37.671: INFO: update-demo-nautilus-2qxcx is created but not running May 9 21:24:42.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5105' May 9 21:24:42.778: INFO: stderr: "" May 9 21:24:42.778: INFO: stdout: "update-demo-nautilus-2qxcx update-demo-nautilus-nzs6v " May 9 21:24:42.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qxcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5105' May 9 21:24:42.860: INFO: stderr: "" May 9 21:24:42.860: INFO: stdout: "true" May 9 21:24:42.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qxcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5105' May 9 21:24:42.968: INFO: stderr: "" May 9 21:24:42.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:24:42.968: INFO: validating pod update-demo-nautilus-2qxcx May 9 21:24:42.972: INFO: got data: { "image": "nautilus.jpg" } May 9 21:24:42.972: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 21:24:42.972: INFO: update-demo-nautilus-2qxcx is verified up and running May 9 21:24:42.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzs6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5105' May 9 21:24:43.059: INFO: stderr: "" May 9 21:24:43.059: INFO: stdout: "true" May 9 21:24:43.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzs6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5105' May 9 21:24:43.149: INFO: stderr: "" May 9 21:24:43.149: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:24:43.149: INFO: validating pod update-demo-nautilus-nzs6v May 9 21:24:43.153: INFO: got data: { "image": "nautilus.jpg" } May 9 21:24:43.153: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 9 21:24:43.153: INFO: update-demo-nautilus-nzs6v is verified up and running STEP: using delete to clean up resources May 9 21:24:43.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5105' May 9 21:24:43.290: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:24:43.290: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 9 21:24:43.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5105' May 9 21:24:43.386: INFO: stderr: "No resources found in kubectl-5105 namespace.\n" May 9 21:24:43.386: INFO: stdout: "" May 9 21:24:43.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5105 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 21:24:43.519: INFO: stderr: "" May 9 21:24:43.519: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:43.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5105" for this suite. • [SLOW TEST:6.377 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":43,"skipped":665,"failed":0} SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:43.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 9 21:24:55.675: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:55.675: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:55.712855 7 log.go:172] (0xc00265ebb0) (0xc0014ed360) Create stream I0509 21:24:55.712894 7 log.go:172] (0xc00265ebb0) (0xc0014ed360) 
Stream added, broadcasting: 1 I0509 21:24:55.715206 7 log.go:172] (0xc00265ebb0) Reply frame received for 1 I0509 21:24:55.715247 7 log.go:172] (0xc00265ebb0) (0xc001ac6d20) Create stream I0509 21:24:55.715256 7 log.go:172] (0xc00265ebb0) (0xc001ac6d20) Stream added, broadcasting: 3 I0509 21:24:55.716227 7 log.go:172] (0xc00265ebb0) Reply frame received for 3 I0509 21:24:55.716262 7 log.go:172] (0xc00265ebb0) (0xc001e40fa0) Create stream I0509 21:24:55.716275 7 log.go:172] (0xc00265ebb0) (0xc001e40fa0) Stream added, broadcasting: 5 I0509 21:24:55.717311 7 log.go:172] (0xc00265ebb0) Reply frame received for 5 I0509 21:24:55.789892 7 log.go:172] (0xc00265ebb0) Data frame received for 3 I0509 21:24:55.789934 7 log.go:172] (0xc001ac6d20) (3) Data frame handling I0509 21:24:55.790060 7 log.go:172] (0xc001ac6d20) (3) Data frame sent I0509 21:24:55.790081 7 log.go:172] (0xc00265ebb0) Data frame received for 3 I0509 21:24:55.790100 7 log.go:172] (0xc001ac6d20) (3) Data frame handling I0509 21:24:55.790116 7 log.go:172] (0xc00265ebb0) Data frame received for 5 I0509 21:24:55.790130 7 log.go:172] (0xc001e40fa0) (5) Data frame handling I0509 21:24:55.791264 7 log.go:172] (0xc00265ebb0) Data frame received for 1 I0509 21:24:55.791284 7 log.go:172] (0xc0014ed360) (1) Data frame handling I0509 21:24:55.791297 7 log.go:172] (0xc0014ed360) (1) Data frame sent I0509 21:24:55.791394 7 log.go:172] (0xc00265ebb0) (0xc0014ed360) Stream removed, broadcasting: 1 I0509 21:24:55.791423 7 log.go:172] (0xc00265ebb0) Go away received I0509 21:24:55.791478 7 log.go:172] (0xc00265ebb0) (0xc0014ed360) Stream removed, broadcasting: 1 I0509 21:24:55.791505 7 log.go:172] (0xc00265ebb0) (0xc001ac6d20) Stream removed, broadcasting: 3 I0509 21:24:55.791525 7 log.go:172] (0xc00265ebb0) (0xc001e40fa0) Stream removed, broadcasting: 5 May 9 21:24:55.791: INFO: Exec stderr: "" May 9 21:24:55.791: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:55.791: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:55.827073 7 log.go:172] (0xc00265f1e0) (0xc0014ed5e0) Create stream I0509 21:24:55.827103 7 log.go:172] (0xc00265f1e0) (0xc0014ed5e0) Stream added, broadcasting: 1 I0509 21:24:55.829329 7 log.go:172] (0xc00265f1e0) Reply frame received for 1 I0509 21:24:55.829360 7 log.go:172] (0xc00265f1e0) (0xc001e41040) Create stream I0509 21:24:55.829387 7 log.go:172] (0xc00265f1e0) (0xc001e41040) Stream added, broadcasting: 3 I0509 21:24:55.830373 7 log.go:172] (0xc00265f1e0) Reply frame received for 3 I0509 21:24:55.830407 7 log.go:172] (0xc00265f1e0) (0xc0028fe780) Create stream I0509 21:24:55.830419 7 log.go:172] (0xc00265f1e0) (0xc0028fe780) Stream added, broadcasting: 5 I0509 21:24:55.831602 7 log.go:172] (0xc00265f1e0) Reply frame received for 5 I0509 21:24:55.908035 7 log.go:172] (0xc00265f1e0) Data frame received for 5 I0509 21:24:55.908065 7 log.go:172] (0xc0028fe780) (5) Data frame handling I0509 21:24:55.908097 7 log.go:172] (0xc00265f1e0) Data frame received for 3 I0509 21:24:55.908120 7 log.go:172] (0xc001e41040) (3) Data frame handling I0509 21:24:55.908152 7 log.go:172] (0xc001e41040) (3) Data frame sent I0509 21:24:55.908164 7 log.go:172] (0xc00265f1e0) Data frame received for 3 I0509 21:24:55.908169 7 log.go:172] (0xc001e41040) (3) Data frame handling I0509 21:24:55.910395 7 log.go:172] (0xc00265f1e0) Data frame received for 1 I0509 21:24:55.910415 7 
log.go:172] (0xc0014ed5e0) (1) Data frame handling I0509 21:24:55.910424 7 log.go:172] (0xc0014ed5e0) (1) Data frame sent I0509 21:24:55.910437 7 log.go:172] (0xc00265f1e0) (0xc0014ed5e0) Stream removed, broadcasting: 1 I0509 21:24:55.910450 7 log.go:172] (0xc00265f1e0) Go away received I0509 21:24:55.910573 7 log.go:172] (0xc00265f1e0) (0xc0014ed5e0) Stream removed, broadcasting: 1 I0509 21:24:55.910591 7 log.go:172] (0xc00265f1e0) (0xc001e41040) Stream removed, broadcasting: 3 I0509 21:24:55.910610 7 log.go:172] (0xc00265f1e0) (0xc0028fe780) Stream removed, broadcasting: 5 May 9 21:24:55.910: INFO: Exec stderr: "" May 9 21:24:55.910: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:55.910: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:55.934279 7 log.go:172] (0xc0007a0b00) (0xc001ac70e0) Create stream I0509 21:24:55.934311 7 log.go:172] (0xc0007a0b00) (0xc001ac70e0) Stream added, broadcasting: 1 I0509 21:24:55.936063 7 log.go:172] (0xc0007a0b00) Reply frame received for 1 I0509 21:24:55.936099 7 log.go:172] (0xc0007a0b00) (0xc001ac7180) Create stream I0509 21:24:55.936116 7 log.go:172] (0xc0007a0b00) (0xc001ac7180) Stream added, broadcasting: 3 I0509 21:24:55.936962 7 log.go:172] (0xc0007a0b00) Reply frame received for 3 I0509 21:24:55.936999 7 log.go:172] (0xc0007a0b00) (0xc001ac7220) Create stream I0509 21:24:55.937015 7 log.go:172] (0xc0007a0b00) (0xc001ac7220) Stream added, broadcasting: 5 I0509 21:24:55.938037 7 log.go:172] (0xc0007a0b00) Reply frame received for 5 I0509 21:24:56.006612 7 log.go:172] (0xc0007a0b00) Data frame received for 5 I0509 21:24:56.006646 7 log.go:172] (0xc001ac7220) (5) Data frame handling I0509 21:24:56.006667 7 log.go:172] (0xc0007a0b00) Data frame received for 3 I0509 21:24:56.006676 7 log.go:172] (0xc001ac7180) (3) Data frame handling I0509 21:24:56.006685 7 log.go:172] (0xc001ac7180) (3) Data frame sent I0509 21:24:56.006694 7 log.go:172] (0xc0007a0b00) Data frame received for 3 I0509 21:24:56.006702 7 log.go:172] (0xc001ac7180) (3) Data frame handling I0509 21:24:56.008146 7 log.go:172] (0xc0007a0b00) Data frame received for 1 I0509 21:24:56.008179 7 log.go:172] (0xc001ac70e0) (1) Data frame handling I0509 21:24:56.008200 7 log.go:172] (0xc001ac70e0) (1) Data frame sent I0509 21:24:56.008214 7 log.go:172] (0xc0007a0b00) (0xc001ac70e0) Stream removed, broadcasting: 1 I0509 21:24:56.008278 7 log.go:172] (0xc0007a0b00) Go away received I0509 21:24:56.008312 7 log.go:172] (0xc0007a0b00) (0xc001ac70e0) Stream removed, broadcasting: 1 I0509 21:24:56.008333 7 log.go:172] (0xc0007a0b00) (0xc001ac7180) Stream removed, broadcasting: 3 I0509 21:24:56.008344 7 log.go:172] (0xc0007a0b00) (0xc001ac7220) Stream removed, broadcasting: 5 May 9 21:24:56.008: INFO: Exec stderr: "" May 9 21:24:56.008: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.008: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.038480 7 log.go:172] (0xc00265f970) (0xc0014ed860) Create stream I0509 21:24:56.038511 7 log.go:172] (0xc00265f970) (0xc0014ed860) Stream added, broadcasting: 1 I0509 21:24:56.040364 7 log.go:172] (0xc00265f970) Reply frame received for 1 I0509 21:24:56.040393 7 log.go:172] (0xc00265f970) (0xc001ac7400) Create stream I0509 
21:24:56.040406 7 log.go:172] (0xc00265f970) (0xc001ac7400) Stream added, broadcasting: 3 I0509 21:24:56.041539 7 log.go:172] (0xc00265f970) Reply frame received for 3 I0509 21:24:56.041558 7 log.go:172] (0xc00265f970) (0xc0028fe820) Create stream I0509 21:24:56.041565 7 log.go:172] (0xc00265f970) (0xc0028fe820) Stream added, broadcasting: 5 I0509 21:24:56.042451 7 log.go:172] (0xc00265f970) Reply frame received for 5 I0509 21:24:56.101966 7 log.go:172] (0xc00265f970) Data frame received for 5 I0509 21:24:56.102012 7 log.go:172] (0xc0028fe820) (5) Data frame handling I0509 21:24:56.102049 7 log.go:172] (0xc00265f970) Data frame received for 3 I0509 21:24:56.102062 7 log.go:172] (0xc001ac7400) (3) Data frame handling I0509 21:24:56.102074 7 log.go:172] (0xc001ac7400) (3) Data frame sent I0509 21:24:56.102088 7 log.go:172] (0xc00265f970) Data frame received for 3 I0509 21:24:56.102099 7 log.go:172] (0xc001ac7400) (3) Data frame handling I0509 21:24:56.103604 7 log.go:172] (0xc00265f970) Data frame received for 1 I0509 21:24:56.103629 7 log.go:172] (0xc0014ed860) (1) Data frame handling I0509 21:24:56.103647 7 log.go:172] (0xc0014ed860) (1) Data frame sent I0509 21:24:56.103663 7 log.go:172] (0xc00265f970) (0xc0014ed860) Stream removed, broadcasting: 1 I0509 21:24:56.103683 7 log.go:172] (0xc00265f970) Go away received I0509 21:24:56.103756 7 log.go:172] (0xc00265f970) (0xc0014ed860) Stream removed, broadcasting: 1 I0509 21:24:56.103789 7 log.go:172] (0xc00265f970) (0xc001ac7400) Stream removed, broadcasting: 3 I0509 21:24:56.103815 7 log.go:172] (0xc00265f970) (0xc0028fe820) Stream removed, broadcasting: 5 May 9 21:24:56.103: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 9 21:24:56.103: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.103: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.136035 7 log.go:172] (0xc002a07600) (0xc0028feb40) Create stream I0509 21:24:56.136058 7 log.go:172] (0xc002a07600) (0xc0028feb40) Stream added, broadcasting: 1 I0509 21:24:56.138865 7 log.go:172] (0xc002a07600) Reply frame received for 1 I0509 21:24:56.138903 7 log.go:172] (0xc002a07600) (0xc0028febe0) Create stream I0509 21:24:56.138917 7 log.go:172] (0xc002a07600) (0xc0028febe0) Stream added, broadcasting: 3 I0509 21:24:56.140043 7 log.go:172] (0xc002a07600) Reply frame received for 3 I0509 21:24:56.140077 7 log.go:172] (0xc002a07600) (0xc0014ed900) Create stream I0509 21:24:56.140103 7 log.go:172] (0xc002a07600) (0xc0014ed900) Stream added, broadcasting: 5 I0509 21:24:56.141023 7 log.go:172] (0xc002a07600) Reply frame received for 5 I0509 21:24:56.213576 7 log.go:172] (0xc002a07600) Data frame received for 5 I0509 21:24:56.213600 7 log.go:172] (0xc0014ed900) (5) Data frame handling I0509 21:24:56.213632 7 log.go:172] (0xc002a07600) Data frame received for 3 I0509 21:24:56.213677 7 log.go:172] (0xc0028febe0) (3) Data frame handling I0509 21:24:56.213697 7 log.go:172] (0xc0028febe0) (3) Data frame sent I0509 21:24:56.213715 7 log.go:172] (0xc002a07600) Data frame received for 3 I0509 21:24:56.213725 7 log.go:172] (0xc0028febe0) (3) Data frame handling I0509 21:24:56.214700 7 log.go:172] (0xc002a07600) Data frame received for 1 I0509 21:24:56.214727 7 log.go:172] (0xc0028feb40) (1) Data frame handling I0509 21:24:56.214759 7 log.go:172] (0xc0028feb40) (1) 
Data frame sent I0509 21:24:56.214781 7 log.go:172] (0xc002a07600) (0xc0028feb40) Stream removed, broadcasting: 1 I0509 21:24:56.214805 7 log.go:172] (0xc002a07600) Go away received I0509 21:24:56.214904 7 log.go:172] (0xc002a07600) (0xc0028feb40) Stream removed, broadcasting: 1 I0509 21:24:56.214921 7 log.go:172] (0xc002a07600) (0xc0028febe0) Stream removed, broadcasting: 3 I0509 21:24:56.214928 7 log.go:172] (0xc002a07600) (0xc0014ed900) Stream removed, broadcasting: 5 May 9 21:24:56.214: INFO: Exec stderr: "" May 9 21:24:56.214: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.214: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.247714 7 log.go:172] (0xc0007a1130) (0xc001ac7720) Create stream I0509 21:24:56.247739 7 log.go:172] (0xc0007a1130) (0xc001ac7720) Stream added, broadcasting: 1 I0509 21:24:56.250343 7 log.go:172] (0xc0007a1130) Reply frame received for 1 I0509 21:24:56.250397 7 log.go:172] (0xc0007a1130) (0xc0016d0280) Create stream I0509 21:24:56.250414 7 log.go:172] (0xc0007a1130) (0xc0016d0280) Stream added, broadcasting: 3 I0509 21:24:56.251621 7 log.go:172] (0xc0007a1130) Reply frame received for 3 I0509 21:24:56.251680 7 log.go:172] (0xc0007a1130) (0xc001ac7860) Create stream I0509 21:24:56.251708 7 log.go:172] (0xc0007a1130) (0xc001ac7860) Stream added, broadcasting: 5 I0509 21:24:56.252723 7 log.go:172] (0xc0007a1130) Reply frame received for 5 I0509 21:24:56.303221 7 log.go:172] (0xc0007a1130) Data frame received for 3 I0509 21:24:56.303254 7 log.go:172] (0xc0016d0280) (3) Data frame handling I0509 21:24:56.303263 7 log.go:172] (0xc0016d0280) (3) Data frame sent I0509 21:24:56.303268 7 log.go:172] (0xc0007a1130) Data frame received for 3 I0509 21:24:56.303276 7 log.go:172] (0xc0016d0280) (3) Data frame handling I0509 21:24:56.303310 7 log.go:172] (0xc0007a1130) Data frame received for 5 I0509 21:24:56.303340 7 log.go:172] (0xc001ac7860) (5) Data frame handling I0509 21:24:56.304984 7 log.go:172] (0xc0007a1130) Data frame received for 1 I0509 21:24:56.305007 7 log.go:172] (0xc001ac7720) (1) Data frame handling I0509 21:24:56.305017 7 log.go:172] (0xc001ac7720) (1) Data frame sent I0509 21:24:56.305034 7 log.go:172] (0xc0007a1130) (0xc001ac7720) Stream removed, broadcasting: 1 I0509 21:24:56.305050 7 log.go:172] (0xc0007a1130) Go away received I0509 21:24:56.305373 7 log.go:172] (0xc0007a1130) (0xc001ac7720) Stream removed, broadcasting: 1 I0509 21:24:56.305408 7 log.go:172] (0xc0007a1130) (0xc0016d0280) Stream removed, broadcasting: 3 I0509 21:24:56.305420 7 log.go:172] (0xc0007a1130) (0xc001ac7860) Stream removed, broadcasting: 5 May 9 21:24:56.305: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 9 21:24:56.305: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.305: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.408053 7 log.go:172] (0xc000ae8790) (0xc001e412c0) Create stream I0509 21:24:56.408099 7 log.go:172] (0xc000ae8790) (0xc001e412c0) Stream added, broadcasting: 1 I0509 21:24:56.410191 7 log.go:172] (0xc000ae8790) Reply frame received for 1 I0509 21:24:56.410261 7 log.go:172] (0xc000ae8790) (0xc0028fec80) Create stream I0509 
21:24:56.410285 7 log.go:172] (0xc000ae8790) (0xc0028fec80) Stream added, broadcasting: 3 I0509 21:24:56.411287 7 log.go:172] (0xc000ae8790) Reply frame received for 3 I0509 21:24:56.411344 7 log.go:172] (0xc000ae8790) (0xc0016d0320) Create stream I0509 21:24:56.411362 7 log.go:172] (0xc000ae8790) (0xc0016d0320) Stream added, broadcasting: 5 I0509 21:24:56.412367 7 log.go:172] (0xc000ae8790) Reply frame received for 5 I0509 21:24:56.479859 7 log.go:172] (0xc000ae8790) Data frame received for 5 I0509 21:24:56.479929 7 log.go:172] (0xc0016d0320) (5) Data frame handling I0509 21:24:56.479966 7 log.go:172] (0xc000ae8790) Data frame received for 3 I0509 21:24:56.479981 7 log.go:172] (0xc0028fec80) (3) Data frame handling I0509 21:24:56.480014 7 log.go:172] (0xc0028fec80) (3) Data frame sent I0509 21:24:56.480032 7 log.go:172] (0xc000ae8790) Data frame received for 3 I0509 21:24:56.480045 7 log.go:172] (0xc0028fec80) (3) Data frame handling I0509 21:24:56.482286 7 log.go:172] (0xc000ae8790) Data frame received for 1 I0509 21:24:56.482324 7 log.go:172] (0xc001e412c0) (1) Data frame handling I0509 21:24:56.482358 7 log.go:172] (0xc001e412c0) (1) Data frame sent I0509 21:24:56.482392 7 log.go:172] (0xc000ae8790) (0xc001e412c0) Stream removed, broadcasting: 1 I0509 21:24:56.482419 7 log.go:172] (0xc000ae8790) Go away received I0509 21:24:56.482549 7 log.go:172] (0xc000ae8790) (0xc001e412c0) Stream removed, broadcasting: 1 I0509 21:24:56.482614 7 log.go:172] (0xc000ae8790) (0xc0028fec80) Stream removed, broadcasting: 3 I0509 21:24:56.482642 7 log.go:172] (0xc000ae8790) (0xc0016d0320) Stream removed, broadcasting: 5 May 9 21:24:56.482: INFO: Exec stderr: "" May 9 21:24:56.482: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.482: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.525268 7 log.go:172] (0xc0007a1760) (0xc001ac7ae0) Create stream I0509 21:24:56.525300 7 log.go:172] (0xc0007a1760) (0xc001ac7ae0) Stream added, broadcasting: 1 I0509 21:24:56.531777 7 log.go:172] (0xc0007a1760) Reply frame received for 1 I0509 21:24:56.531833 7 log.go:172] (0xc0007a1760) (0xc0014edb80) Create stream I0509 21:24:56.531863 7 log.go:172] (0xc0007a1760) (0xc0014edb80) Stream added, broadcasting: 3 I0509 21:24:56.538008 7 log.go:172] (0xc0007a1760) Reply frame received for 3 I0509 21:24:56.538063 7 log.go:172] (0xc0007a1760) (0xc0028fed20) Create stream I0509 21:24:56.538091 7 log.go:172] (0xc0007a1760) (0xc0028fed20) Stream added, broadcasting: 5 I0509 21:24:56.539860 7 log.go:172] (0xc0007a1760) Reply frame received for 5 I0509 21:24:56.589425 7 log.go:172] (0xc0007a1760) Data frame received for 3 I0509 21:24:56.589466 7 log.go:172] (0xc0007a1760) Data frame received for 5 I0509 21:24:56.589496 7 log.go:172] (0xc0028fed20) (5) Data frame handling I0509 21:24:56.589513 7 log.go:172] (0xc0014edb80) (3) Data frame handling I0509 21:24:56.589525 7 log.go:172] (0xc0014edb80) (3) Data frame sent I0509 21:24:56.589541 7 log.go:172] (0xc0007a1760) Data frame received for 3 I0509 21:24:56.589549 7 log.go:172] (0xc0014edb80) (3) Data frame handling I0509 21:24:56.590427 7 log.go:172] (0xc0007a1760) Data frame received for 1 I0509 21:24:56.590442 7 log.go:172] (0xc001ac7ae0) (1) Data frame handling I0509 21:24:56.590451 7 log.go:172] (0xc001ac7ae0) (1) Data frame sent I0509 21:24:56.590462 7 log.go:172] (0xc0007a1760) (0xc001ac7ae0) 
Stream removed, broadcasting: 1 I0509 21:24:56.590494 7 log.go:172] (0xc0007a1760) Go away received I0509 21:24:56.590532 7 log.go:172] (0xc0007a1760) (0xc001ac7ae0) Stream removed, broadcasting: 1 I0509 21:24:56.590545 7 log.go:172] (0xc0007a1760) (0xc0014edb80) Stream removed, broadcasting: 3 I0509 21:24:56.590550 7 log.go:172] (0xc0007a1760) (0xc0028fed20) Stream removed, broadcasting: 5 May 9 21:24:56.590: INFO: Exec stderr: "" May 9 21:24:56.590: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.590: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.614329 7 log.go:172] (0xc0007a1d90) (0xc001ac7d60) Create stream I0509 21:24:56.614351 7 log.go:172] (0xc0007a1d90) (0xc001ac7d60) Stream added, broadcasting: 1 I0509 21:24:56.615839 7 log.go:172] (0xc0007a1d90) Reply frame received for 1 I0509 21:24:56.615874 7 log.go:172] (0xc0007a1d90) (0xc0028fedc0) Create stream I0509 21:24:56.615882 7 log.go:172] (0xc0007a1d90) (0xc0028fedc0) Stream added, broadcasting: 3 I0509 21:24:56.616759 7 log.go:172] (0xc0007a1d90) Reply frame received for 3 I0509 21:24:56.616788 7 log.go:172] (0xc0007a1d90) (0xc0014edd60) Create stream I0509 21:24:56.616798 7 log.go:172] (0xc0007a1d90) (0xc0014edd60) Stream added, broadcasting: 5 I0509 21:24:56.617733 7 log.go:172] (0xc0007a1d90) Reply frame received for 5 I0509 21:24:56.686743 7 log.go:172] (0xc0007a1d90) Data frame received for 5 I0509 21:24:56.686796 7 log.go:172] (0xc0014edd60) (5) Data frame handling I0509 21:24:56.686824 7 log.go:172] (0xc0007a1d90) Data frame received for 3 I0509 21:24:56.686846 7 log.go:172] (0xc0028fedc0) (3) Data frame handling I0509 21:24:56.686866 7 log.go:172] (0xc0028fedc0) (3) Data frame sent I0509 21:24:56.686891 7 log.go:172] (0xc0007a1d90) Data frame received for 3 I0509 21:24:56.686905 7 log.go:172] (0xc0028fedc0) (3) Data frame handling I0509 21:24:56.688480 7 log.go:172] (0xc0007a1d90) Data frame received for 1 I0509 21:24:56.688522 7 log.go:172] (0xc001ac7d60) (1) Data frame handling I0509 21:24:56.688552 7 log.go:172] (0xc001ac7d60) (1) Data frame sent I0509 21:24:56.688589 7 log.go:172] (0xc0007a1d90) (0xc001ac7d60) Stream removed, broadcasting: 1 I0509 21:24:56.688621 7 log.go:172] (0xc0007a1d90) Go away received I0509 21:24:56.688735 7 log.go:172] (0xc0007a1d90) (0xc001ac7d60) Stream removed, broadcasting: 1 I0509 21:24:56.688759 7 log.go:172] (0xc0007a1d90) (0xc0028fedc0) Stream removed, broadcasting: 3 I0509 21:24:56.688787 7 log.go:172] (0xc0007a1d90) (0xc0014edd60) Stream removed, broadcasting: 5 May 9 21:24:56.688: INFO: Exec stderr: "" May 9 21:24:56.688: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6774 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:24:56.688: INFO: >>> kubeConfig: /root/.kube/config I0509 21:24:56.726084 7 log.go:172] (0xc000ae8dc0) (0xc001e41540) Create stream I0509 21:24:56.726112 7 log.go:172] (0xc000ae8dc0) (0xc001e41540) Stream added, broadcasting: 1 I0509 21:24:56.727941 7 log.go:172] (0xc000ae8dc0) Reply frame received for 1 I0509 21:24:56.727992 7 log.go:172] (0xc000ae8dc0) (0xc001e41680) Create stream I0509 21:24:56.728012 7 log.go:172] (0xc000ae8dc0) (0xc001e41680) Stream added, broadcasting: 3 I0509 21:24:56.728974 7 log.go:172] (0xc000ae8dc0) Reply frame received for 3 I0509 
21:24:56.729024 7 log.go:172] (0xc000ae8dc0) (0xc0014edea0) Create stream I0509 21:24:56.729043 7 log.go:172] (0xc000ae8dc0) (0xc0014edea0) Stream added, broadcasting: 5 I0509 21:24:56.730159 7 log.go:172] (0xc000ae8dc0) Reply frame received for 5 I0509 21:24:56.793603 7 log.go:172] (0xc000ae8dc0) Data frame received for 5 I0509 21:24:56.793629 7 log.go:172] (0xc0014edea0) (5) Data frame handling I0509 21:24:56.793664 7 log.go:172] (0xc000ae8dc0) Data frame received for 3 I0509 21:24:56.793679 7 log.go:172] (0xc001e41680) (3) Data frame handling I0509 21:24:56.793698 7 log.go:172] (0xc001e41680) (3) Data frame sent I0509 21:24:56.793722 7 log.go:172] (0xc000ae8dc0) Data frame received for 3 I0509 21:24:56.793734 7 log.go:172] (0xc001e41680) (3) Data frame handling I0509 21:24:56.795834 7 log.go:172] (0xc000ae8dc0) Data frame received for 1 I0509 21:24:56.795868 7 log.go:172] (0xc001e41540) (1) Data frame handling I0509 21:24:56.795891 7 log.go:172] (0xc001e41540) (1) Data frame sent I0509 21:24:56.795912 7 log.go:172] (0xc000ae8dc0) (0xc001e41540) Stream removed, broadcasting: 1 I0509 21:24:56.795932 7 log.go:172] (0xc000ae8dc0) Go away received I0509 21:24:56.796060 7 log.go:172] (0xc000ae8dc0) (0xc001e41540) Stream removed, broadcasting: 1 I0509 21:24:56.796084 7 log.go:172] (0xc000ae8dc0) (0xc001e41680) Stream removed, broadcasting: 3 I0509 21:24:56.796096 7 log.go:172] (0xc000ae8dc0) (0xc0014edea0) Stream removed, broadcasting: 5 May 9 21:24:56.796: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:24:56.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6774" for this suite. 
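Note on the exec plumbing above: each ExecWithOptions entry is one exec into a pod through the API server, and the numbered streams in the log.go frames ("broadcasting: 1", "3", "5") are the multiplexed channels of a single SPDY connection (the error channel plus stdout and stderr here, since no stdin was requested). A minimal client-go sketch of the same call, with the namespace, pod, and container names taken from the log and the rest being the generic client-go exec pattern rather than the framework's exact helper:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Same kubeconfig path the suite logs; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Build a request against the pod's "exec" subresource, mirroring the
	// ExecWithOptions entries above (namespace/pod/container from the log).
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-6774").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// NewSPDYExecutor performs the connection upgrade; the numbered streams
	// in the log are the multiplexed channels of this one connection.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout:\n%s\nstderr: %q\n", stdout.String(), stderr.String())
}

Run against the live test namespace, this would print the kubelet-managed hosts file for busybox-1, which is what the test asserts on.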
• [SLOW TEST:13.278 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":672,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:24:56.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 9 21:24:56.922: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 9 21:25:07.359: INFO: >>> kubeConfig: /root/.kube/config May 9 21:25:10.243: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:25:19.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3659" for this suite. 
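For context on the crd-publish-openapi steps above: the test registers throwaway CRDs and then polls the aggregated /openapi/v2 document until their schemas appear. A sketch of the kind of multiversion CRD the "one multiversion CRD" step exercises; the group, kind, and version names here are hypothetical placeholders, and the schema is the bare minimum apiextensions.k8s.io/v1 accepts:

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclientset.NewForConfigOrDie(config)

	// Bare-minimum structural schema; the e2e fixtures attach richer ones.
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}

	crd := &apiextensionsv1.CustomResourceDefinition{
		// Hypothetical group and kind, not the test's generated names.
		ObjectMeta: metav1.ObjectMeta{Name: "demos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "demos",
				Singular: "demo",
				Kind:     "Demo",
				ListKind: "DemoList",
			},
			// Two served versions under one group: exactly one version may
			// be the storage version.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}

	// v1.17-era client signature; newer client-go also takes a context.Context.
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(crd); err != nil {
		panic(err)
	}
	// The suite then waits for definitions of both versions to show up in
	// the apiserver's published OpenAPI document.
}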
• [SLOW TEST:22.921 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":45,"skipped":681,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:25:19.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-m7w2d in namespace proxy-4247 I0509 21:25:19.814022 7 runners.go:189] Created replication controller with name: proxy-service-m7w2d, namespace: proxy-4247, replica count: 1 I0509 21:25:20.864411 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:25:21.864629 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:25:22.864839 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:25:23.865046 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:25:24.865382 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 21:25:25.865605 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 21:25:26.865846 7 runners.go:189] proxy-service-m7w2d Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 9 21:25:26.869: INFO: setup took 7.102862044s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 9 21:25:26.875: INFO: (0) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 5.145572ms) May 9 21:25:26.879: INFO: (0) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 8.952669ms) May 9 21:25:26.879: INFO: (0) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 9.213031ms) May 9 21:25:26.879: INFO: (0) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 9.399885ms) May 9 
21:25:26.881: INFO: (0) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 11.389239ms) May 9 21:25:26.883: INFO: (0) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 13.342224ms) May 9 21:25:26.884: INFO: (0) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 14.545684ms) May 9 21:25:26.884: INFO: (0) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 14.538333ms) May 9 21:25:26.884: INFO: (0) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 14.696755ms) May 9 21:25:26.884: INFO: (0) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 14.746601ms) May 9 21:25:26.884: INFO: (0) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 14.740699ms) May 9 21:25:26.887: INFO: (0) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 17.008884ms) May 9 21:25:26.887: INFO: (0) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 17.45641ms) May 9 21:25:26.887: INFO: (0) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 17.379341ms) May 9 21:25:26.887: INFO: (0) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 17.650579ms) May 9 21:25:26.890: INFO: (0) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test (200; 3.684104ms) May 9 21:25:26.894: INFO: (1) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.492284ms) May 9 21:25:26.894: INFO: (1) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 4.646451ms) May 9 21:25:26.894: INFO: (1) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 4.248189ms) May 9 21:25:26.895: INFO: (1) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 4.534076ms) May 9 21:25:26.895: INFO: (1) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.991277ms) May 9 21:25:26.895: INFO: (1) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.702108ms) May 9 21:25:26.895: INFO: (1) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 4.87753ms) May 9 21:25:26.895: INFO: (1) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.829172ms) May 9 21:25:26.895: INFO: (1) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test<... (200; 5.127631ms) May 9 21:25:26.903: INFO: (2) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... 
(200; 5.317234ms) May 9 21:25:26.908: INFO: (2) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 10.377151ms) May 9 21:25:26.908: INFO: (2) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 10.431569ms) May 9 21:25:26.908: INFO: (2) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 10.564714ms) May 9 21:25:26.909: INFO: (2) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 11.02948ms) May 9 21:25:26.909: INFO: (2) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 11.002396ms) May 9 21:25:26.909: INFO: (2) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 11.040683ms) May 9 21:25:26.909: INFO: (2) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test (200; 11.145783ms) May 9 21:25:26.909: INFO: (2) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 11.3288ms) May 9 21:25:26.909: INFO: (2) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 11.683239ms) May 9 21:25:26.910: INFO: (2) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 11.917781ms) May 9 21:25:26.910: INFO: (2) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 12.44855ms) May 9 21:25:26.910: INFO: (2) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 12.525761ms) May 9 21:25:26.914: INFO: (3) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 3.663362ms) May 9 21:25:26.914: INFO: (3) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 3.75103ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 4.064059ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 4.06344ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.098329ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 4.124103ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.234ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.386525ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.509501ms) May 9 21:25:26.915: INFO: (3) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... (200; 4.951692ms) May 9 21:25:26.923: INFO: (4) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 5.363705ms) May 9 21:25:26.923: INFO: (4) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test (200; 5.377354ms) May 9 21:25:26.923: INFO: (4) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 5.344495ms) May 9 21:25:26.923: INFO: (4) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... 
(200; 5.466673ms) May 9 21:25:26.925: INFO: (4) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 7.973014ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 3.445292ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.382762ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... (200; 3.396219ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.502203ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.762783ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.698705ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 3.708805ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.058314ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 4.049603ms) May 9 21:25:26.929: INFO: (5) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 4.074344ms) May 9 21:25:26.930: INFO: (5) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 4.150006ms) May 9 21:25:26.930: INFO: (5) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 4.426963ms) May 9 21:25:26.930: INFO: (5) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 4.482715ms) May 9 21:25:26.930: INFO: (5) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 4.500219ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... (200; 3.388389ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 3.389454ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.516229ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.47282ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... 
(200; 3.463668ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.467305ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.49171ms) May 9 21:25:26.933: INFO: (6) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.449776ms) May 9 21:25:26.934: INFO: (6) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 4.038767ms) May 9 21:25:26.934: INFO: (6) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 4.328077ms) May 9 21:25:26.934: INFO: (6) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 4.408473ms) May 9 21:25:26.934: INFO: (6) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 4.471096ms) May 9 21:25:26.935: INFO: (6) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 4.590074ms) May 9 21:25:26.935: INFO: (6) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 4.551566ms) May 9 21:25:26.937: INFO: (7) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 2.379385ms) May 9 21:25:26.937: INFO: (7) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 2.857089ms) May 9 21:25:26.938: INFO: (7) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 3.284101ms) May 9 21:25:26.938: INFO: (7) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 3.777429ms) May 9 21:25:26.938: INFO: (7) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.830533ms) May 9 21:25:26.938: INFO: (7) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... (200; 2.927517ms) May 9 21:25:26.968: INFO: (8) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 2.91171ms) May 9 21:25:26.969: INFO: (8) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 3.373905ms) May 9 21:25:26.972: INFO: (8) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 6.584791ms) May 9 21:25:26.972: INFO: (8) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 6.72385ms) May 9 21:25:26.973: INFO: (8) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test (200; 3.368741ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.937075ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 3.979677ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 4.000921ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.967149ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.103676ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.039317ms) May 9 21:25:26.982: INFO: (9) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... 
(200; 4.974458ms) May 9 21:25:26.984: INFO: (9) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 5.775614ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 3.784485ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 3.77219ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.740627ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.821514ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 3.819263ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.800479ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.832977ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 3.792112ms) May 9 21:25:26.988: INFO: (10) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test (200; 3.663289ms) May 9 21:25:26.995: INFO: (11) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 3.630624ms) May 9 21:25:26.995: INFO: (11) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 3.67832ms) May 9 21:25:26.995: INFO: (11) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.670979ms) May 9 21:25:26.995: INFO: (11) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 3.699226ms) May 9 21:25:26.995: INFO: (11) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.747343ms) May 9 21:25:26.996: INFO: (11) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test<... (200; 3.554704ms) May 9 21:25:27.000: INFO: (12) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 3.561427ms) May 9 21:25:27.002: INFO: (12) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 5.3879ms) May 9 21:25:27.002: INFO: (12) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 5.439183ms) May 9 21:25:27.002: INFO: (12) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... 
(200; 5.383373ms) May 9 21:25:27.003: INFO: (12) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 5.846613ms) May 9 21:25:27.003: INFO: (12) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 6.409028ms) May 9 21:25:27.003: INFO: (12) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 6.450062ms) May 9 21:25:27.003: INFO: (12) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 6.680641ms) May 9 21:25:27.003: INFO: (12) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 6.795869ms) May 9 21:25:27.003: INFO: (12) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 6.668283ms) May 9 21:25:27.004: INFO: (12) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 6.748621ms) May 9 21:25:27.008: INFO: (13) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.798499ms) May 9 21:25:27.008: INFO: (13) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.836116ms) May 9 21:25:27.008: INFO: (13) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.911247ms) May 9 21:25:27.008: INFO: (13) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 4.767932ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 4.895761ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 5.005615ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 5.025617ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 5.136837ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 5.187933ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 5.363788ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 5.370099ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 5.518506ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... 
(200; 5.485267ms) May 9 21:25:27.009: INFO: (13) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 5.831994ms) May 9 21:25:27.010: INFO: (13) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 5.835231ms) May 9 21:25:27.012: INFO: (14) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 2.416855ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 3.870583ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 4.134815ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 4.190699ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 4.325174ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 4.347967ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 4.259618ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.275105ms) May 9 21:25:27.014: INFO: (14) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.687774ms) May 9 21:25:27.015: INFO: (14) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 4.837223ms) May 9 21:25:27.015: INFO: (14) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 4.858481ms) May 9 21:25:27.015: INFO: (14) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.900352ms) May 9 21:25:27.015: INFO: (14) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 4.894335ms) May 9 21:25:27.015: INFO: (14) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 4.868035ms) May 9 21:25:27.015: INFO: (14) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test<... (200; 5.076913ms) May 9 21:25:27.017: INFO: (15) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 2.351219ms) May 9 21:25:27.018: INFO: (15) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 2.965952ms) May 9 21:25:27.018: INFO: (15) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.039302ms) May 9 21:25:27.018: INFO: (15) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.481863ms) May 9 21:25:27.018: INFO: (15) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.502826ms) May 9 21:25:27.018: INFO: (15) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 3.63941ms) May 9 21:25:27.018: INFO: (15) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test<... 
(200; 4.150492ms) May 9 21:25:27.019: INFO: (15) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.200147ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.874704ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.958168ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 4.074337ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 4.146967ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 4.208365ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 4.379221ms) May 9 21:25:27.023: INFO: (16) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.443034ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 4.981601ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 5.044943ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 5.094634ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 5.117449ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 5.128923ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 5.119888ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 5.255818ms) May 9 21:25:27.024: INFO: (16) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: test (200; 3.648395ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... (200; 3.61895ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... 
(200; 3.647078ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.798713ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.755419ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 3.702184ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.673192ms) May 9 21:25:27.028: INFO: (17) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.992025ms) May 9 21:25:27.031: INFO: (17) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname1/proxy/: foo (200; 6.34306ms) May 9 21:25:27.031: INFO: (17) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 6.331551ms) May 9 21:25:27.031: INFO: (17) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 6.311976ms) May 9 21:25:27.031: INFO: (17) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 6.363033ms) May 9 21:25:27.031: INFO: (17) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 6.407532ms) May 9 21:25:27.031: INFO: (17) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 6.395066ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... (200; 2.960228ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 2.949773ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 3.028153ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 2.99586ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:1080/proxy/: ... (200; 3.044636ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:162/proxy/: bar (200; 3.000555ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:460/proxy/: tls baz (200; 3.01549ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 3.029244ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:462/proxy/: tls qux (200; 3.121222ms) May 9 21:25:27.034: INFO: (18) /api/v1/namespaces/proxy-4247/pods/https:proxy-service-m7w2d-rqmlk:443/proxy/: ... (200; 4.842691ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/services/http:proxy-service-m7w2d:portname2/proxy/: bar (200; 4.729941ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk:1080/proxy/: test<... 
(200; 4.780524ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/pods/proxy-service-m7w2d-rqmlk/proxy/: test (200; 4.812957ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/pods/http:proxy-service-m7w2d-rqmlk:160/proxy/: foo (200; 4.862009ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname2/proxy/: bar (200; 4.858399ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/services/proxy-service-m7w2d:portname1/proxy/: foo (200; 4.836997ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname1/proxy/: tls baz (200; 4.946163ms) May 9 21:25:27.040: INFO: (19) /api/v1/namespaces/proxy-4247/services/https:proxy-service-m7w2d:tlsportname2/proxy/: tls qux (200; 4.946385ms) STEP: deleting ReplicationController proxy-service-m7w2d in namespace proxy-4247, will wait for the garbage collector to delete the pods May 9 21:25:27.278: INFO: Deleting ReplicationController proxy-service-m7w2d took: 185.310265ms May 9 21:25:40.778: INFO: Terminating ReplicationController proxy-service-m7w2d pods took: 13.500258543s [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:25:49.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4247" for this suite. • [SLOW TEST:29.559 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":46,"skipped":688,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:25:49.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:25:49.369: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:25:49.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9344" for this suite. 
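The custom-resource-definition case above targets the CRD object's own /status subresource (get, update, patch). A hedged sketch of the update path using the apiextensions clientset; the CRD name and condition type are invented for illustration, since the real test creates and deletes its own CRD:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crds := apiextensionsclientset.NewForConfigOrDie(config).
		ApiextensionsV1().CustomResourceDefinitions()

	// Hypothetical CRD name; see the multiversion sketch earlier.
	crd, err := crds.Get("demos.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Mutate only status and write it back through UpdateStatus, which hits
	// the /status subresource; a plain Update ignores status changes.
	crd.Status.Conditions = append(crd.Status.Conditions,
		apiextensionsv1.CustomResourceDefinitionCondition{
			Type:               "StatusUpdateDemo", // custom condition type
			Status:             apiextensionsv1.ConditionTrue,
			Reason:             "E2eDemo",
			Message:            "set via the status subresource",
			LastTransitionTime: metav1.Now(),
		})
	updated, err := crds.UpdateStatus(crd)
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions now:", len(updated.Status.Conditions))
	// Patching routes the same way: pass "status" as the trailing
	// subresource argument, e.g. Patch(name, types.MergePatchType, data, "status").
}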
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":47,"skipped":693,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:25:49.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 21:25:50.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-279' May 9 21:25:54.075: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 21:25:54.075: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 9 21:25:56.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-279' May 9 21:25:56.256: INFO: stderr: "" May 9 21:25:56.256: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:25:56.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-279" for this suite. 
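Each "Running '/usr/local/bin/kubectl ...'" entry above is the framework shelling out to the real CLI and capturing stdout and stderr separately, which is why the generator deprecation warning lands in stderr while "deployment.apps/e2e-test-httpd-deployment created" lands in stdout. A rough standalone equivalent; the runKubectl helper is invented here, while the binary path, kubeconfig, and arguments are copied from the log:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runKubectl is a hypothetical stand-in for the framework's kubectl runner:
// it shells out to the binary and keeps stdout and stderr separate, the way
// the log reports them.
func runKubectl(args ...string) (string, string, error) {
	cmd := exec.Command("/usr/local/bin/kubectl",
		append([]string{"--kubeconfig=/root/.kube/config"}, args...)...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	// The same invocation as the test above; on a v1.17-era kubectl this
	// defaults to the deployment generator and prints the deprecation
	// warning on stderr.
	out, errOut, err := runKubectl("run", "e2e-test-httpd-deployment",
		"--image=docker.io/library/httpd:2.4.38-alpine", "--namespace=kubectl-279")
	fmt.Printf("stdout: %q\nstderr: %q\nerr: %v\n", out, errOut, err)
}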
• [SLOW TEST:6.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1483 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":48,"skipped":711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:25:56.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:25:57.806: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:25:59.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656357, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656357, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656358, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656357, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:26:03.025: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the 
validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:26:03.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4614" for this suite. STEP: Destroying namespace "webhook-4614-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.955 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":49,"skipped":734,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:26:03.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 9 21:26:03.339: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9371 /api/v1/namespaces/watch-9371/configmaps/e2e-watch-test-label-changed f53431cb-e518-4e00-93ef-98fa1638ca38 14797577 0 2020-05-09 21:26:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 21:26:03.339: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9371 /api/v1/namespaces/watch-9371/configmaps/e2e-watch-test-label-changed f53431cb-e518-4e00-93ef-98fa1638ca38 14797578 0 2020-05-09 21:26:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 9 21:26:03.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9371 /api/v1/namespaces/watch-9371/configmaps/e2e-watch-test-label-changed f53431cb-e518-4e00-93ef-98fa1638ca38 14797579 0 2020-05-09 21:26:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: 
changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 9 21:26:13.397: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9371 /api/v1/namespaces/watch-9371/configmaps/e2e-watch-test-label-changed f53431cb-e518-4e00-93ef-98fa1638ca38 14797627 0 2020-05-09 21:26:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 21:26:13.397: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9371 /api/v1/namespaces/watch-9371/configmaps/e2e-watch-test-label-changed f53431cb-e518-4e00-93ef-98fa1638ca38 14797628 0 2020-05-09 21:26:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 9 21:26:13.397: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9371 /api/v1/namespaces/watch-9371/configmaps/e2e-watch-test-label-changed f53431cb-e518-4e00-93ef-98fa1638ca38 14797629 0 2020-05-09 21:26:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:26:13.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9371" for this suite. • [SLOW TEST:10.183 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":50,"skipped":742,"failed":0} [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:26:13.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 9 21:26:13.446: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
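Before the aggregator run above continues, note that the selector-driven watch just validated can be reproduced with stock kubectl. A sketch using the namespace and label from this spec; --output-watch-events is only available in recent kubectl releases:

  # Watch only configmaps carrying the test label. Changing the label away
  # delivers a DELETED watch event; restoring it delivers ADDED again.
  kubectl get configmaps -n watch-9371 \
    -l watch-this-configmap=label-changed-and-restored \
    --watch --output-watch-events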
May 9 21:26:13.961: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 9 21:26:16.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656373, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656373, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656374, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656373, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:26:18.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656373, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656373, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656374, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656373, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:26:20.748: INFO: Waited 524.414945ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:26:21.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3123" for this suite. 
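Registering a sample API server, as this spec just did, reduces to one APIService object pointing the aggregator at an in-cluster Service. A minimal sketch; the group, version, and service names are assumptions modeled on the upstream sample-apiserver, not values recorded in this run:

  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.wardle.example.com    # must be <version>.<group>
  spec:
    group: wardle.example.com
    version: v1alpha1
    service:
      name: sample-api                   # Service fronting the aggregated apiserver
      namespace: aggregator-3123
    caBundle: <base64-encoded-CA>        # or insecureSkipTLSVerify: true for a throwaway demo
    groupPriorityMinimum: 1000
    versionPriority: 15

Once the APIService reports Available, `kubectl get --raw /apis/wardle.example.com/v1alpha1` should return the discovery document for the aggregated group.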
• [SLOW TEST:8.066 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":51,"skipped":742,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:26:21.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7412 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7412 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7412 May 9 21:26:21.881: INFO: Found 0 stateful pods, waiting for 1 May 9 21:26:31.885: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 9 21:26:31.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:26:32.199: INFO: stderr: "I0509 21:26:32.074165 1104 log.go:172] (0xc000440160) (0xc0005f08c0) Create stream\nI0509 21:26:32.074250 1104 log.go:172] (0xc000440160) (0xc0005f08c0) Stream added, broadcasting: 1\nI0509 21:26:32.076974 1104 log.go:172] (0xc000440160) Reply frame received for 1\nI0509 21:26:32.077031 1104 log.go:172] (0xc000440160) (0xc0008dc000) Create stream\nI0509 21:26:32.077050 1104 log.go:172] (0xc000440160) (0xc0008dc000) Stream added, broadcasting: 3\nI0509 21:26:32.078312 1104 log.go:172] (0xc000440160) Reply frame received for 3\nI0509 21:26:32.078353 1104 log.go:172] (0xc000440160) (0xc0008dc140) Create stream\nI0509 21:26:32.078376 1104 log.go:172] (0xc000440160) (0xc0008dc140) Stream added, broadcasting: 5\nI0509 21:26:32.079318 1104 log.go:172] (0xc000440160) Reply frame received for 5\nI0509 21:26:32.156048 1104 log.go:172] (0xc000440160) Data frame received for 5\nI0509 21:26:32.156074 1104 log.go:172] (0xc0008dc140) (5) Data frame handling\nI0509 21:26:32.156089 1104 log.go:172] 
(0xc0008dc140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:26:32.190897 1104 log.go:172] (0xc000440160) Data frame received for 3\nI0509 21:26:32.190964 1104 log.go:172] (0xc0008dc000) (3) Data frame handling\nI0509 21:26:32.190984 1104 log.go:172] (0xc0008dc000) (3) Data frame sent\nI0509 21:26:32.191185 1104 log.go:172] (0xc000440160) Data frame received for 3\nI0509 21:26:32.191205 1104 log.go:172] (0xc0008dc000) (3) Data frame handling\nI0509 21:26:32.191218 1104 log.go:172] (0xc000440160) Data frame received for 5\nI0509 21:26:32.191223 1104 log.go:172] (0xc0008dc140) (5) Data frame handling\nI0509 21:26:32.193383 1104 log.go:172] (0xc000440160) Data frame received for 1\nI0509 21:26:32.193436 1104 log.go:172] (0xc0005f08c0) (1) Data frame handling\nI0509 21:26:32.193482 1104 log.go:172] (0xc0005f08c0) (1) Data frame sent\nI0509 21:26:32.193506 1104 log.go:172] (0xc000440160) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0509 21:26:32.193556 1104 log.go:172] (0xc000440160) Go away received\nI0509 21:26:32.193981 1104 log.go:172] (0xc000440160) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0509 21:26:32.194002 1104 log.go:172] (0xc000440160) (0xc0008dc000) Stream removed, broadcasting: 3\nI0509 21:26:32.194020 1104 log.go:172] (0xc000440160) (0xc0008dc140) Stream removed, broadcasting: 5\n" May 9 21:26:32.199: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:26:32.199: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:26:32.203: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 9 21:26:42.216: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 21:26:42.216: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:26:42.249: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999378s May 9 21:26:43.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.974669581s May 9 21:26:44.258: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971047111s May 9 21:26:45.262: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.966354644s May 9 21:26:46.266: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.961925764s May 9 21:26:47.271: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957487269s May 9 21:26:48.276: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.952933431s May 9 21:26:49.294: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.948458749s May 9 21:26:50.298: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.929544077s May 9 21:26:51.303: INFO: Verifying statefulset ss doesn't scale past 1 for another 925.706176ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7412 May 9 21:26:52.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:26:52.566: INFO: stderr: "I0509 21:26:52.463739 1124 log.go:172] (0xc000a34160) (0xc0007854a0) Create stream\nI0509 21:26:52.463796 1124 log.go:172] (0xc000a34160) (0xc0007854a0) Stream added, broadcasting: 1\nI0509 21:26:52.466650 1124 log.go:172] (0xc000a34160) Reply frame received for 1\nI0509 21:26:52.466695 1124 
log.go:172] (0xc000a34160) (0xc000a0a000) Create stream\nI0509 21:26:52.466709 1124 log.go:172] (0xc000a34160) (0xc000a0a000) Stream added, broadcasting: 3\nI0509 21:26:52.467805 1124 log.go:172] (0xc000a34160) Reply frame received for 3\nI0509 21:26:52.467848 1124 log.go:172] (0xc000a34160) (0xc000707ae0) Create stream\nI0509 21:26:52.467862 1124 log.go:172] (0xc000a34160) (0xc000707ae0) Stream added, broadcasting: 5\nI0509 21:26:52.468731 1124 log.go:172] (0xc000a34160) Reply frame received for 5\nI0509 21:26:52.559948 1124 log.go:172] (0xc000a34160) Data frame received for 5\nI0509 21:26:52.559991 1124 log.go:172] (0xc000707ae0) (5) Data frame handling\nI0509 21:26:52.560002 1124 log.go:172] (0xc000707ae0) (5) Data frame sent\nI0509 21:26:52.560010 1124 log.go:172] (0xc000a34160) Data frame received for 5\nI0509 21:26:52.560016 1124 log.go:172] (0xc000707ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:26:52.560034 1124 log.go:172] (0xc000a34160) Data frame received for 3\nI0509 21:26:52.560041 1124 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0509 21:26:52.560054 1124 log.go:172] (0xc000a0a000) (3) Data frame sent\nI0509 21:26:52.560077 1124 log.go:172] (0xc000a34160) Data frame received for 3\nI0509 21:26:52.560085 1124 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0509 21:26:52.561546 1124 log.go:172] (0xc000a34160) Data frame received for 1\nI0509 21:26:52.561615 1124 log.go:172] (0xc0007854a0) (1) Data frame handling\nI0509 21:26:52.561649 1124 log.go:172] (0xc0007854a0) (1) Data frame sent\nI0509 21:26:52.561668 1124 log.go:172] (0xc000a34160) (0xc0007854a0) Stream removed, broadcasting: 1\nI0509 21:26:52.561686 1124 log.go:172] (0xc000a34160) Go away received\nI0509 21:26:52.562088 1124 log.go:172] (0xc000a34160) (0xc0007854a0) Stream removed, broadcasting: 1\nI0509 21:26:52.562104 1124 log.go:172] (0xc000a34160) (0xc000a0a000) Stream removed, broadcasting: 3\nI0509 21:26:52.562111 1124 log.go:172] (0xc000a34160) (0xc000707ae0) Stream removed, broadcasting: 5\n" May 9 21:26:52.566: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:26:52.566: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:26:52.569: INFO: Found 1 stateful pods, waiting for 3 May 9 21:27:02.575: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:27:02.575: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:27:02.575: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 9 21:27:02.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:27:02.832: INFO: stderr: "I0509 21:27:02.726731 1146 log.go:172] (0xc00096a6e0) (0xc000a8a000) Create stream\nI0509 21:27:02.726788 1146 log.go:172] (0xc00096a6e0) (0xc000a8a000) Stream added, broadcasting: 1\nI0509 21:27:02.729599 1146 log.go:172] (0xc00096a6e0) Reply frame received for 1\nI0509 21:27:02.729656 1146 log.go:172] (0xc00096a6e0) (0xc00067fc20) Create stream\nI0509 21:27:02.729691 1146 log.go:172] (0xc00096a6e0) (0xc00067fc20) Stream added, broadcasting: 3\nI0509 21:27:02.730801 1146 
log.go:172] (0xc00096a6e0) Reply frame received for 3\nI0509 21:27:02.730839 1146 log.go:172] (0xc00096a6e0) (0xc000a8a0a0) Create stream\nI0509 21:27:02.730852 1146 log.go:172] (0xc00096a6e0) (0xc000a8a0a0) Stream added, broadcasting: 5\nI0509 21:27:02.731979 1146 log.go:172] (0xc00096a6e0) Reply frame received for 5\nI0509 21:27:02.824740 1146 log.go:172] (0xc00096a6e0) Data frame received for 5\nI0509 21:27:02.824800 1146 log.go:172] (0xc000a8a0a0) (5) Data frame handling\nI0509 21:27:02.824818 1146 log.go:172] (0xc000a8a0a0) (5) Data frame sent\nI0509 21:27:02.824832 1146 log.go:172] (0xc00096a6e0) Data frame received for 5\nI0509 21:27:02.824842 1146 log.go:172] (0xc000a8a0a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:27:02.824898 1146 log.go:172] (0xc00096a6e0) Data frame received for 3\nI0509 21:27:02.824944 1146 log.go:172] (0xc00067fc20) (3) Data frame handling\nI0509 21:27:02.824976 1146 log.go:172] (0xc00067fc20) (3) Data frame sent\nI0509 21:27:02.824999 1146 log.go:172] (0xc00096a6e0) Data frame received for 3\nI0509 21:27:02.825018 1146 log.go:172] (0xc00067fc20) (3) Data frame handling\nI0509 21:27:02.826778 1146 log.go:172] (0xc00096a6e0) Data frame received for 1\nI0509 21:27:02.826797 1146 log.go:172] (0xc000a8a000) (1) Data frame handling\nI0509 21:27:02.826813 1146 log.go:172] (0xc000a8a000) (1) Data frame sent\nI0509 21:27:02.826832 1146 log.go:172] (0xc00096a6e0) (0xc000a8a000) Stream removed, broadcasting: 1\nI0509 21:27:02.826861 1146 log.go:172] (0xc00096a6e0) Go away received\nI0509 21:27:02.827259 1146 log.go:172] (0xc00096a6e0) (0xc000a8a000) Stream removed, broadcasting: 1\nI0509 21:27:02.827283 1146 log.go:172] (0xc00096a6e0) (0xc00067fc20) Stream removed, broadcasting: 3\nI0509 21:27:02.827298 1146 log.go:172] (0xc00096a6e0) (0xc000a8a0a0) Stream removed, broadcasting: 5\n" May 9 21:27:02.832: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:27:02.832: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:27:02.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:27:03.062: INFO: stderr: "I0509 21:27:02.959557 1169 log.go:172] (0xc000204dc0) (0xc0006c3ae0) Create stream\nI0509 21:27:02.959621 1169 log.go:172] (0xc000204dc0) (0xc0006c3ae0) Stream added, broadcasting: 1\nI0509 21:27:02.962388 1169 log.go:172] (0xc000204dc0) Reply frame received for 1\nI0509 21:27:02.962428 1169 log.go:172] (0xc000204dc0) (0xc000a0a000) Create stream\nI0509 21:27:02.962443 1169 log.go:172] (0xc000204dc0) (0xc000a0a000) Stream added, broadcasting: 3\nI0509 21:27:02.963681 1169 log.go:172] (0xc000204dc0) Reply frame received for 3\nI0509 21:27:02.963720 1169 log.go:172] (0xc000204dc0) (0xc000402000) Create stream\nI0509 21:27:02.963734 1169 log.go:172] (0xc000204dc0) (0xc000402000) Stream added, broadcasting: 5\nI0509 21:27:02.964631 1169 log.go:172] (0xc000204dc0) Reply frame received for 5\nI0509 21:27:03.028140 1169 log.go:172] (0xc000204dc0) Data frame received for 5\nI0509 21:27:03.028166 1169 log.go:172] (0xc000402000) (5) Data frame handling\nI0509 21:27:03.028182 1169 log.go:172] (0xc000402000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:27:03.054495 1169 log.go:172] (0xc000204dc0) Data frame received for 5\nI0509 
21:27:03.054539 1169 log.go:172] (0xc000402000) (5) Data frame handling\nI0509 21:27:03.054575 1169 log.go:172] (0xc000204dc0) Data frame received for 3\nI0509 21:27:03.054608 1169 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0509 21:27:03.054643 1169 log.go:172] (0xc000a0a000) (3) Data frame sent\nI0509 21:27:03.054681 1169 log.go:172] (0xc000204dc0) Data frame received for 3\nI0509 21:27:03.054706 1169 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0509 21:27:03.056534 1169 log.go:172] (0xc000204dc0) Data frame received for 1\nI0509 21:27:03.056564 1169 log.go:172] (0xc0006c3ae0) (1) Data frame handling\nI0509 21:27:03.056579 1169 log.go:172] (0xc0006c3ae0) (1) Data frame sent\nI0509 21:27:03.056610 1169 log.go:172] (0xc000204dc0) (0xc0006c3ae0) Stream removed, broadcasting: 1\nI0509 21:27:03.057044 1169 log.go:172] (0xc000204dc0) (0xc0006c3ae0) Stream removed, broadcasting: 1\nI0509 21:27:03.057097 1169 log.go:172] (0xc000204dc0) (0xc000a0a000) Stream removed, broadcasting: 3\nI0509 21:27:03.057328 1169 log.go:172] (0xc000204dc0) (0xc000402000) Stream removed, broadcasting: 5\n" May 9 21:27:03.062: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:27:03.062: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:27:03.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:27:03.314: INFO: stderr: "I0509 21:27:03.205881 1190 log.go:172] (0xc0000e31e0) (0xc0006f19a0) Create stream\nI0509 21:27:03.205951 1190 log.go:172] (0xc0000e31e0) (0xc0006f19a0) Stream added, broadcasting: 1\nI0509 21:27:03.208744 1190 log.go:172] (0xc0000e31e0) Reply frame received for 1\nI0509 21:27:03.208803 1190 log.go:172] (0xc0000e31e0) (0xc000a9a000) Create stream\nI0509 21:27:03.208821 1190 log.go:172] (0xc0000e31e0) (0xc000a9a000) Stream added, broadcasting: 3\nI0509 21:27:03.210356 1190 log.go:172] (0xc0000e31e0) Reply frame received for 3\nI0509 21:27:03.210397 1190 log.go:172] (0xc0000e31e0) (0xc000a9a0a0) Create stream\nI0509 21:27:03.210408 1190 log.go:172] (0xc0000e31e0) (0xc000a9a0a0) Stream added, broadcasting: 5\nI0509 21:27:03.211582 1190 log.go:172] (0xc0000e31e0) Reply frame received for 5\nI0509 21:27:03.263642 1190 log.go:172] (0xc0000e31e0) Data frame received for 5\nI0509 21:27:03.263672 1190 log.go:172] (0xc000a9a0a0) (5) Data frame handling\nI0509 21:27:03.263689 1190 log.go:172] (0xc000a9a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:27:03.307449 1190 log.go:172] (0xc0000e31e0) Data frame received for 3\nI0509 21:27:03.307496 1190 log.go:172] (0xc000a9a000) (3) Data frame handling\nI0509 21:27:03.307532 1190 log.go:172] (0xc000a9a000) (3) Data frame sent\nI0509 21:27:03.307662 1190 log.go:172] (0xc0000e31e0) Data frame received for 3\nI0509 21:27:03.307734 1190 log.go:172] (0xc000a9a000) (3) Data frame handling\nI0509 21:27:03.307780 1190 log.go:172] (0xc0000e31e0) Data frame received for 5\nI0509 21:27:03.307802 1190 log.go:172] (0xc000a9a0a0) (5) Data frame handling\nI0509 21:27:03.309807 1190 log.go:172] (0xc0000e31e0) Data frame received for 1\nI0509 21:27:03.309822 1190 log.go:172] (0xc0006f19a0) (1) Data frame handling\nI0509 21:27:03.309831 1190 log.go:172] (0xc0006f19a0) (1) Data frame sent\nI0509 21:27:03.309907 1190 log.go:172] (0xc0000e31e0) 
(0xc0006f19a0) Stream removed, broadcasting: 1\nI0509 21:27:03.309994 1190 log.go:172] (0xc0000e31e0) Go away received\nI0509 21:27:03.310150 1190 log.go:172] (0xc0000e31e0) (0xc0006f19a0) Stream removed, broadcasting: 1\nI0509 21:27:03.310168 1190 log.go:172] (0xc0000e31e0) (0xc000a9a000) Stream removed, broadcasting: 3\nI0509 21:27:03.310176 1190 log.go:172] (0xc0000e31e0) (0xc000a9a0a0) Stream removed, broadcasting: 5\n" May 9 21:27:03.315: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:27:03.315: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:27:03.315: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:27:03.324: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 9 21:27:13.331: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 21:27:13.331: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 9 21:27:13.331: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 9 21:27:13.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999768s May 9 21:27:14.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993478213s May 9 21:27:15.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988711388s May 9 21:27:16.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983565502s May 9 21:27:17.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978537905s May 9 21:27:18.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.96829424s May 9 21:27:19.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963217429s May 9 21:27:20.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.933601836s May 9 21:27:21.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.929469196s May 9 21:27:22.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 925.143826ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7412 May 9 21:27:23.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:27:23.654: INFO: stderr: "I0509 21:27:23.556147 1210 log.go:172] (0xc0001042c0) (0xc00030d400) Create stream\nI0509 21:27:23.556212 1210 log.go:172] (0xc0001042c0) (0xc00030d400) Stream added, broadcasting: 1\nI0509 21:27:23.559391 1210 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0509 21:27:23.559455 1210 log.go:172] (0xc0001042c0) (0xc000665a40) Create stream\nI0509 21:27:23.559471 1210 log.go:172] (0xc0001042c0) (0xc000665a40) Stream added, broadcasting: 3\nI0509 21:27:23.560517 1210 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0509 21:27:23.560555 1210 log.go:172] (0xc0001042c0) (0xc00094a0a0) Create stream\nI0509 21:27:23.560564 1210 log.go:172] (0xc0001042c0) (0xc00094a0a0) Stream added, broadcasting: 5\nI0509 21:27:23.561707 1210 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0509 21:27:23.646096 1210 log.go:172] (0xc0001042c0) Data frame received for 5\nI0509 21:27:23.646146 1210 log.go:172] (0xc0001042c0) Data frame received for 3\nI0509 21:27:23.646169 1210 log.go:172] (0xc000665a40) (3) Data frame 
handling\nI0509 21:27:23.646190 1210 log.go:172] (0xc000665a40) (3) Data frame sent\nI0509 21:27:23.646202 1210 log.go:172] (0xc0001042c0) Data frame received for 3\nI0509 21:27:23.646212 1210 log.go:172] (0xc000665a40) (3) Data frame handling\nI0509 21:27:23.646244 1210 log.go:172] (0xc00094a0a0) (5) Data frame handling\nI0509 21:27:23.646257 1210 log.go:172] (0xc00094a0a0) (5) Data frame sent\nI0509 21:27:23.646268 1210 log.go:172] (0xc0001042c0) Data frame received for 5\nI0509 21:27:23.646280 1210 log.go:172] (0xc00094a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:27:23.648339 1210 log.go:172] (0xc0001042c0) Data frame received for 1\nI0509 21:27:23.648361 1210 log.go:172] (0xc00030d400) (1) Data frame handling\nI0509 21:27:23.648378 1210 log.go:172] (0xc00030d400) (1) Data frame sent\nI0509 21:27:23.648395 1210 log.go:172] (0xc0001042c0) (0xc00030d400) Stream removed, broadcasting: 1\nI0509 21:27:23.648411 1210 log.go:172] (0xc0001042c0) Go away received\nI0509 21:27:23.648848 1210 log.go:172] (0xc0001042c0) (0xc00030d400) Stream removed, broadcasting: 1\nI0509 21:27:23.648870 1210 log.go:172] (0xc0001042c0) (0xc000665a40) Stream removed, broadcasting: 3\nI0509 21:27:23.648882 1210 log.go:172] (0xc0001042c0) (0xc00094a0a0) Stream removed, broadcasting: 5\n" May 9 21:27:23.654: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:27:23.654: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:27:23.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:27:23.875: INFO: stderr: "I0509 21:27:23.790973 1231 log.go:172] (0xc0000f4d10) (0xc00074a780) Create stream\nI0509 21:27:23.791048 1231 log.go:172] (0xc0000f4d10) (0xc00074a780) Stream added, broadcasting: 1\nI0509 21:27:23.793403 1231 log.go:172] (0xc0000f4d10) Reply frame received for 1\nI0509 21:27:23.793451 1231 log.go:172] (0xc0000f4d10) (0xc00074a8c0) Create stream\nI0509 21:27:23.793463 1231 log.go:172] (0xc0000f4d10) (0xc00074a8c0) Stream added, broadcasting: 3\nI0509 21:27:23.794412 1231 log.go:172] (0xc0000f4d10) Reply frame received for 3\nI0509 21:27:23.794455 1231 log.go:172] (0xc0000f4d10) (0xc0007c6000) Create stream\nI0509 21:27:23.794481 1231 log.go:172] (0xc0000f4d10) (0xc0007c6000) Stream added, broadcasting: 5\nI0509 21:27:23.795347 1231 log.go:172] (0xc0000f4d10) Reply frame received for 5\nI0509 21:27:23.869817 1231 log.go:172] (0xc0000f4d10) Data frame received for 5\nI0509 21:27:23.869864 1231 log.go:172] (0xc0007c6000) (5) Data frame handling\nI0509 21:27:23.869877 1231 log.go:172] (0xc0007c6000) (5) Data frame sent\nI0509 21:27:23.869887 1231 log.go:172] (0xc0000f4d10) Data frame received for 5\nI0509 21:27:23.869895 1231 log.go:172] (0xc0007c6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:27:23.869937 1231 log.go:172] (0xc0000f4d10) Data frame received for 3\nI0509 21:27:23.869956 1231 log.go:172] (0xc00074a8c0) (3) Data frame handling\nI0509 21:27:23.869978 1231 log.go:172] (0xc00074a8c0) (3) Data frame sent\nI0509 21:27:23.869996 1231 log.go:172] (0xc0000f4d10) Data frame received for 3\nI0509 21:27:23.870004 1231 log.go:172] (0xc00074a8c0) (3) Data frame handling\nI0509 21:27:23.871317 1231 log.go:172] (0xc0000f4d10) Data frame received for 1\nI0509 
21:27:23.871339 1231 log.go:172] (0xc00074a780) (1) Data frame handling\nI0509 21:27:23.871354 1231 log.go:172] (0xc00074a780) (1) Data frame sent\nI0509 21:27:23.871372 1231 log.go:172] (0xc0000f4d10) (0xc00074a780) Stream removed, broadcasting: 1\nI0509 21:27:23.871398 1231 log.go:172] (0xc0000f4d10) Go away received\nI0509 21:27:23.871702 1231 log.go:172] (0xc0000f4d10) (0xc00074a780) Stream removed, broadcasting: 1\nI0509 21:27:23.871717 1231 log.go:172] (0xc0000f4d10) (0xc00074a8c0) Stream removed, broadcasting: 3\nI0509 21:27:23.871723 1231 log.go:172] (0xc0000f4d10) (0xc0007c6000) Stream removed, broadcasting: 5\n" May 9 21:27:23.875: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:27:23.875: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:27:23.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:27:24.101: INFO: stderr: "I0509 21:27:24.026504 1253 log.go:172] (0xc000a2f3f0) (0xc000a7a780) Create stream\nI0509 21:27:24.026562 1253 log.go:172] (0xc000a2f3f0) (0xc000a7a780) Stream added, broadcasting: 1\nI0509 21:27:24.030960 1253 log.go:172] (0xc000a2f3f0) Reply frame received for 1\nI0509 21:27:24.031030 1253 log.go:172] (0xc000a2f3f0) (0xc00064e5a0) Create stream\nI0509 21:27:24.031055 1253 log.go:172] (0xc000a2f3f0) (0xc00064e5a0) Stream added, broadcasting: 3\nI0509 21:27:24.032081 1253 log.go:172] (0xc000a2f3f0) Reply frame received for 3\nI0509 21:27:24.032119 1253 log.go:172] (0xc000a2f3f0) (0xc00077f360) Create stream\nI0509 21:27:24.032131 1253 log.go:172] (0xc000a2f3f0) (0xc00077f360) Stream added, broadcasting: 5\nI0509 21:27:24.033014 1253 log.go:172] (0xc000a2f3f0) Reply frame received for 5\nI0509 21:27:24.095006 1253 log.go:172] (0xc000a2f3f0) Data frame received for 5\nI0509 21:27:24.095062 1253 log.go:172] (0xc00077f360) (5) Data frame handling\nI0509 21:27:24.095089 1253 log.go:172] (0xc00077f360) (5) Data frame sent\nI0509 21:27:24.095111 1253 log.go:172] (0xc000a2f3f0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:27:24.095141 1253 log.go:172] (0xc00077f360) (5) Data frame handling\nI0509 21:27:24.095204 1253 log.go:172] (0xc000a2f3f0) Data frame received for 3\nI0509 21:27:24.095255 1253 log.go:172] (0xc00064e5a0) (3) Data frame handling\nI0509 21:27:24.095308 1253 log.go:172] (0xc00064e5a0) (3) Data frame sent\nI0509 21:27:24.095336 1253 log.go:172] (0xc000a2f3f0) Data frame received for 3\nI0509 21:27:24.095361 1253 log.go:172] (0xc00064e5a0) (3) Data frame handling\nI0509 21:27:24.096852 1253 log.go:172] (0xc000a2f3f0) Data frame received for 1\nI0509 21:27:24.096883 1253 log.go:172] (0xc000a7a780) (1) Data frame handling\nI0509 21:27:24.096902 1253 log.go:172] (0xc000a7a780) (1) Data frame sent\nI0509 21:27:24.096927 1253 log.go:172] (0xc000a2f3f0) (0xc000a7a780) Stream removed, broadcasting: 1\nI0509 21:27:24.096991 1253 log.go:172] (0xc000a2f3f0) Go away received\nI0509 21:27:24.097572 1253 log.go:172] (0xc000a2f3f0) (0xc000a7a780) Stream removed, broadcasting: 1\nI0509 21:27:24.097599 1253 log.go:172] (0xc000a2f3f0) (0xc00064e5a0) Stream removed, broadcasting: 3\nI0509 21:27:24.097610 1253 log.go:172] (0xc000a2f3f0) (0xc00077f360) Stream removed, broadcasting: 5\n" May 9 21:27:24.101: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" May 9 21:27:24.101: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:27:24.101: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 9 21:27:44.118: INFO: Deleting all statefulset in ns statefulset-7412 May 9 21:27:44.122: INFO: Scaling statefulset ss to 0 May 9 21:27:44.129: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:27:44.131: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:27:44.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7412" for this suite. • [SLOW TEST:82.682 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":52,"skipped":754,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:27:44.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-692 STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 21:27:44.212: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 21:28:08.312: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.8 8081 | grep -v '^\s*$'] Namespace:pod-network-test-692 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:28:08.312: INFO: >>> kubeConfig: /root/.kube/config I0509 21:28:08.345024 7 log.go:172] (0xc0007a04d0) (0xc0016d1540) Create stream I0509 21:28:08.345062 7 log.go:172] (0xc0007a04d0) (0xc0016d1540) Stream added, broadcasting: 1 I0509 21:28:08.347070 7 log.go:172] (0xc0007a04d0) Reply frame received for 1 I0509 21:28:08.347099 7 
log.go:172] (0xc0007a04d0) (0xc0016d15e0) Create stream I0509 21:28:08.347109 7 log.go:172] (0xc0007a04d0) (0xc0016d15e0) Stream added, broadcasting: 3 I0509 21:28:08.348079 7 log.go:172] (0xc0007a04d0) Reply frame received for 3 I0509 21:28:08.348107 7 log.go:172] (0xc0007a04d0) (0xc001e40820) Create stream I0509 21:28:08.348117 7 log.go:172] (0xc0007a04d0) (0xc001e40820) Stream added, broadcasting: 5 I0509 21:28:08.348776 7 log.go:172] (0xc0007a04d0) Reply frame received for 5 I0509 21:28:09.425049 7 log.go:172] (0xc0007a04d0) Data frame received for 3 I0509 21:28:09.425071 7 log.go:172] (0xc0016d15e0) (3) Data frame handling I0509 21:28:09.425084 7 log.go:172] (0xc0016d15e0) (3) Data frame sent I0509 21:28:09.425650 7 log.go:172] (0xc0007a04d0) Data frame received for 5 I0509 21:28:09.425670 7 log.go:172] (0xc001e40820) (5) Data frame handling I0509 21:28:09.425690 7 log.go:172] (0xc0007a04d0) Data frame received for 3 I0509 21:28:09.425700 7 log.go:172] (0xc0016d15e0) (3) Data frame handling I0509 21:28:09.427404 7 log.go:172] (0xc0007a04d0) Data frame received for 1 I0509 21:28:09.427435 7 log.go:172] (0xc0016d1540) (1) Data frame handling I0509 21:28:09.427473 7 log.go:172] (0xc0016d1540) (1) Data frame sent I0509 21:28:09.427499 7 log.go:172] (0xc0007a04d0) (0xc0016d1540) Stream removed, broadcasting: 1 I0509 21:28:09.427527 7 log.go:172] (0xc0007a04d0) Go away received I0509 21:28:09.427736 7 log.go:172] (0xc0007a04d0) (0xc0016d1540) Stream removed, broadcasting: 1 I0509 21:28:09.427763 7 log.go:172] (0xc0007a04d0) (0xc0016d15e0) Stream removed, broadcasting: 3 I0509 21:28:09.427780 7 log.go:172] (0xc0007a04d0) (0xc001e40820) Stream removed, broadcasting: 5 May 9 21:28:09.427: INFO: Found all expected endpoints: [netserver-0] May 9 21:28:09.431: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.157 8081 | grep -v '^\s*$'] Namespace:pod-network-test-692 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:28:09.431: INFO: >>> kubeConfig: /root/.kube/config I0509 21:28:09.463173 7 log.go:172] (0xc0008fe160) (0xc0028fe8c0) Create stream I0509 21:28:09.463200 7 log.go:172] (0xc0008fe160) (0xc0028fe8c0) Stream added, broadcasting: 1 I0509 21:28:09.465380 7 log.go:172] (0xc0008fe160) Reply frame received for 1 I0509 21:28:09.465414 7 log.go:172] (0xc0008fe160) (0xc001916fa0) Create stream I0509 21:28:09.465425 7 log.go:172] (0xc0008fe160) (0xc001916fa0) Stream added, broadcasting: 3 I0509 21:28:09.466531 7 log.go:172] (0xc0008fe160) Reply frame received for 3 I0509 21:28:09.466552 7 log.go:172] (0xc0008fe160) (0xc001e408c0) Create stream I0509 21:28:09.466558 7 log.go:172] (0xc0008fe160) (0xc001e408c0) Stream added, broadcasting: 5 I0509 21:28:09.467555 7 log.go:172] (0xc0008fe160) Reply frame received for 5 I0509 21:28:10.555758 7 log.go:172] (0xc0008fe160) Data frame received for 3 I0509 21:28:10.555872 7 log.go:172] (0xc001916fa0) (3) Data frame handling I0509 21:28:10.555922 7 log.go:172] (0xc001916fa0) (3) Data frame sent I0509 21:28:10.556074 7 log.go:172] (0xc0008fe160) Data frame received for 3 I0509 21:28:10.556111 7 log.go:172] (0xc001916fa0) (3) Data frame handling I0509 21:28:10.556396 7 log.go:172] (0xc0008fe160) Data frame received for 5 I0509 21:28:10.556425 7 log.go:172] (0xc001e408c0) (5) Data frame handling I0509 21:28:10.558773 7 log.go:172] (0xc0008fe160) Data frame received for 1 I0509 21:28:10.558814 7 log.go:172] (0xc0028fe8c0) (1) Data frame handling I0509 
21:28:10.558843 7 log.go:172] (0xc0028fe8c0) (1) Data frame sent I0509 21:28:10.558871 7 log.go:172] (0xc0008fe160) (0xc0028fe8c0) Stream removed, broadcasting: 1 I0509 21:28:10.558897 7 log.go:172] (0xc0008fe160) Go away received I0509 21:28:10.559074 7 log.go:172] (0xc0008fe160) (0xc0028fe8c0) Stream removed, broadcasting: 1 I0509 21:28:10.559096 7 log.go:172] (0xc0008fe160) (0xc001916fa0) Stream removed, broadcasting: 3 I0509 21:28:10.559108 7 log.go:172] (0xc0008fe160) (0xc001e408c0) Stream removed, broadcasting: 5 May 9 21:28:10.559: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:28:10.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-692" for this suite. • [SLOW TEST:26.428 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":769,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:28:10.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 9 21:28:14.734: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 9 21:28:29.833: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:28:29.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-739" for this suite. 
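The grace-period check above deletes the pod through a local API proxy rather than `kubectl delete`, which makes the DeleteOptions body explicit. A rough by-hand equivalent; the pod name is a placeholder, and the fixed port stands in for the random one the test requests with `-p 0`:

  # Expose the API locally, as the test does:
  kubectl proxy --port=8001 &

  # Delete with an explicit grace period; the kubelet should keep the pod
  # in Terminating for up to 30 seconds before it disappears:
  curl -X DELETE \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":30}' \
    http://localhost:8001/api/v1/namespaces/pods-739/pods/<pod-name>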
• [SLOW TEST:19.264 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":54,"skipped":777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:28:29.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-83d5c025-01c1-4103-8add-cb5eea6ef3d8 STEP: Creating a pod to test consume configMaps May 9 21:28:30.000: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776" in namespace "projected-3778" to be "success or failure" May 9 21:28:30.004: INFO: Pod "pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776": Phase="Pending", Reason="", readiness=false. Elapsed: 3.815299ms May 9 21:28:32.032: INFO: Pod "pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031909477s May 9 21:28:34.036: INFO: Pod "pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035645416s May 9 21:28:36.040: INFO: Pod "pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040063258s STEP: Saw pod success May 9 21:28:36.040: INFO: Pod "pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776" satisfied condition "success or failure" May 9 21:28:36.043: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776 container projected-configmap-volume-test: STEP: delete the pod May 9 21:28:36.090: INFO: Waiting for pod pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776 to disappear May 9 21:28:36.115: INFO: Pod pod-projected-configmaps-99ab7238-aea2-478d-9bc3-d8a63adc9776 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:28:36.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3778" for this suite. 
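The scenario above amounts to mounting a ConfigMap through a projected volume while the pod runs as a non-root UID, then reading the file back. A minimal sketch; the pod name and UID are illustrative, and the ConfigMap created by the test is assumed to already exist:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-nonroot           # hypothetical name
  spec:
    securityContext:
      runAsUser: 1000                    # non-root, as the test requires
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "cat /etc/projected/*"]
      volumeMounts:
      - name: cm
        mountPath: /etc/projected
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-83d5c025-01c1-4103-8add-cb5eea6ef3d8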
• [SLOW TEST:6.275 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":814,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:28:36.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 9 21:28:36.168: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 9 21:28:36.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9123' May 9 21:28:36.531: INFO: stderr: "" May 9 21:28:36.531: INFO: stdout: "service/agnhost-slave created\n" May 9 21:28:36.532: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 9 21:28:36.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9123' May 9 21:28:36.784: INFO: stderr: "" May 9 21:28:36.784: INFO: stdout: "service/agnhost-master created\n" May 9 21:28:36.785: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 9 21:28:36.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9123' May 9 21:28:37.132: INFO: stderr: "" May 9 21:28:37.132: INFO: stdout: "service/frontend created\n" May 9 21:28:37.133: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 9 21:28:37.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9123' May 9 21:28:37.388: INFO: stderr: "" May 9 21:28:37.388: INFO: stdout: "deployment.apps/frontend created\n" May 9 21:28:37.388: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 9 21:28:37.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9123' May 9 21:28:37.684: INFO: stderr: "" May 9 21:28:37.684: INFO: stdout: "deployment.apps/agnhost-master created\n" May 9 21:28:37.684: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 9 21:28:37.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9123' May 9 21:28:37.919: INFO: stderr: "" May 9 21:28:37.919: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 9 21:28:37.919: INFO: Waiting for all frontend pods to be Running. May 9 21:28:47.970: INFO: Waiting for frontend to serve content. May 9 21:28:47.981: INFO: Trying to add a new entry to the guestbook. May 9 21:28:47.993: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 9 21:28:48.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9123' May 9 21:28:48.148: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:28:48.148: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 9 21:28:48.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9123' May 9 21:28:48.374: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 9 21:28:48.374: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 9 21:28:48.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9123' May 9 21:28:48.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:28:48.502: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 9 21:28:48.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9123' May 9 21:28:48.616: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:28:48.616: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 9 21:28:48.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9123' May 9 21:28:48.743: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:28:48.743: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 9 21:28:48.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9123' May 9 21:28:48.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 21:28:48.870: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:28:48.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9123" for this suite. 
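The cleanup above uses delete --grace-period=0 --force, which removes the API objects immediately without waiting for the kubelet to confirm termination; that is what the repeated warnings are about. The test pipes the original manifests to delete -f -, but the same pattern by resource name (names reused from this run) looks like:

kubectl delete service frontend --namespace=kubectl-9123 --grace-period=0 --force
kubectl delete deployment frontend --namespace=kubectl-9123 --grace-period=0 --force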
• [SLOW TEST:12.770 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":56,"skipped":831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:28:48.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 9 21:28:49.351: INFO: Created pod &Pod{ObjectMeta:{dns-8876 dns-8876 /api/v1/namespaces/dns-8876/pods/dns-8876 8793f0b1-52c6-4d67-9692-9ec3f87fbde8 14798622 0 2020-05-09 21:28:49 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fk78l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fk78l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fk78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},A
utomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 9 21:28:53.392: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8876 PodName:dns-8876 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:28:53.392: INFO: >>> kubeConfig: /root/.kube/config I0509 21:28:53.426252 7 log.go:172] (0xc0007a0a50) (0xc0016d1f40) Create stream I0509 21:28:53.426283 7 log.go:172] (0xc0007a0a50) (0xc0016d1f40) Stream added, broadcasting: 1 I0509 21:28:53.428598 7 log.go:172] (0xc0007a0a50) Reply frame received for 1 I0509 21:28:53.428620 7 log.go:172] (0xc0007a0a50) (0xc0012ace60) Create stream I0509 21:28:53.428629 7 log.go:172] (0xc0007a0a50) (0xc0012ace60) Stream added, broadcasting: 3 I0509 21:28:53.430007 7 log.go:172] (0xc0007a0a50) Reply frame received for 3 I0509 21:28:53.430042 7 log.go:172] (0xc0007a0a50) (0xc001324f00) Create stream I0509 21:28:53.430054 7 log.go:172] (0xc0007a0a50) (0xc001324f00) Stream added, broadcasting: 5 I0509 21:28:53.430934 7 log.go:172] (0xc0007a0a50) Reply frame received for 5 I0509 21:28:53.500980 7 log.go:172] (0xc0007a0a50) Data frame received for 3 I0509 21:28:53.501019 7 log.go:172] (0xc0012ace60) (3) Data frame handling I0509 21:28:53.501051 7 log.go:172] (0xc0012ace60) (3) Data frame sent I0509 21:28:53.503133 7 log.go:172] (0xc0007a0a50) Data frame received for 3 I0509 21:28:53.503149 7 log.go:172] (0xc0012ace60) (3) Data frame handling I0509 21:28:53.503282 7 log.go:172] (0xc0007a0a50) Data frame received for 5 I0509 21:28:53.503311 7 log.go:172] (0xc001324f00) (5) Data frame handling I0509 21:28:53.504843 7 log.go:172] (0xc0007a0a50) Data frame received for 1 I0509 21:28:53.504874 7 log.go:172] (0xc0016d1f40) (1) Data frame handling I0509 21:28:53.504888 7 log.go:172] (0xc0016d1f40) (1) Data frame sent I0509 21:28:53.504905 7 log.go:172] (0xc0007a0a50) (0xc0016d1f40) Stream removed, broadcasting: 1 I0509 21:28:53.504923 7 log.go:172] (0xc0007a0a50) Go away received I0509 21:28:53.505065 7 log.go:172] (0xc0007a0a50) (0xc0016d1f40) Stream removed, broadcasting: 1 I0509 21:28:53.505096 7 log.go:172] (0xc0007a0a50) (0xc0012ace60) Stream removed, broadcasting: 3 I0509 21:28:53.505352 7 log.go:172] (0xc0007a0a50) (0xc001324f00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 9 21:28:53.505: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8876 PodName:dns-8876 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:28:53.505: INFO: >>> kubeConfig: /root/.kube/config I0509 21:28:53.535375 7 log.go:172] (0xc000ae8b00) (0xc00122caa0) Create stream I0509 21:28:53.535399 7 log.go:172] (0xc000ae8b00) (0xc00122caa0) Stream added, broadcasting: 1 I0509 21:28:53.537731 7 log.go:172] (0xc000ae8b00) Reply frame received for 1 I0509 21:28:53.537779 7 log.go:172] (0xc000ae8b00) (0xc0012ad180) Create stream I0509 21:28:53.537793 7 log.go:172] (0xc000ae8b00) (0xc0012ad180) Stream added, broadcasting: 3 I0509 21:28:53.538766 7 log.go:172] (0xc000ae8b00) Reply frame received for 3 I0509 21:28:53.538806 7 log.go:172] (0xc000ae8b00) (0xc0012ad220) Create stream I0509 21:28:53.538819 7 log.go:172] (0xc000ae8b00) (0xc0012ad220) Stream added, broadcasting: 5 I0509 21:28:53.539817 7 log.go:172] (0xc000ae8b00) Reply frame received for 5 I0509 21:28:53.619429 7 log.go:172] (0xc000ae8b00) Data frame received for 3 I0509 21:28:53.619473 7 log.go:172] (0xc0012ad180) (3) Data frame handling I0509 21:28:53.619494 7 log.go:172] (0xc0012ad180) (3) Data frame sent I0509 21:28:53.620419 7 log.go:172] (0xc000ae8b00) Data frame received for 5 I0509 21:28:53.620451 7 log.go:172] (0xc0012ad220) (5) Data frame handling I0509 21:28:53.620505 7 log.go:172] (0xc000ae8b00) Data frame received for 3 I0509 21:28:53.620542 7 log.go:172] (0xc0012ad180) (3) Data frame handling I0509 21:28:53.622477 7 log.go:172] (0xc000ae8b00) Data frame received for 1 I0509 21:28:53.622494 7 log.go:172] (0xc00122caa0) (1) Data frame handling I0509 21:28:53.622509 7 log.go:172] (0xc00122caa0) (1) Data frame sent I0509 21:28:53.622643 7 log.go:172] (0xc000ae8b00) (0xc00122caa0) Stream removed, broadcasting: 1 I0509 21:28:53.622715 7 log.go:172] (0xc000ae8b00) Go away received I0509 21:28:53.622752 7 log.go:172] (0xc000ae8b00) (0xc00122caa0) Stream removed, broadcasting: 1 I0509 21:28:53.622779 7 log.go:172] (0xc000ae8b00) (0xc0012ad180) Stream removed, broadcasting: 3 I0509 21:28:53.622791 7 log.go:172] (0xc000ae8b00) (0xc0012ad220) Stream removed, broadcasting: 5 May 9 21:28:53.622: INFO: Deleting pod dns-8876... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:28:53.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8876" for this suite. 
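With dnsPolicy: None the kubelet builds the pod's resolv.conf solely from dnsConfig, which is what the two agnhost exec checks (dns-suffix and dns-server-list) verify. A sketch of an equivalent pod, using the nameserver and search values from the spec dump above (pod name is hypothetical):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo
spec:
  dnsPolicy: "None"                # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
EOF
kubectl exec dns-config-demo -- cat /etc/resolv.conf   # should list only the values above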
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":57,"skipped":875,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:28:53.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a in namespace container-probe-547 May 9 21:28:57.856: INFO: Started pod liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a in namespace container-probe-547 STEP: checking the pod's current state and verifying that restartCount is present May 9 21:28:57.859: INFO: Initial restart count of pod liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a is 0 May 9 21:29:10.085: INFO: Restart count of pod container-probe-547/liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a is now 1 (12.226079084s elapsed) May 9 21:29:30.142: INFO: Restart count of pod container-probe-547/liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a is now 2 (32.283341527s elapsed) May 9 21:29:50.183: INFO: Restart count of pod container-probe-547/liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a is now 3 (52.324264643s elapsed) May 9 21:30:10.224: INFO: Restart count of pod container-probe-547/liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a is now 4 (1m12.364996633s elapsed) May 9 21:31:18.397: INFO: Restart count of pod container-probe-547/liveness-ae581503-865e-4636-b02e-b7e5a35a4a4a is now 5 (2m20.538802684s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:31:18.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-547" for this suite. 
• [SLOW TEST:144.752 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":880,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:31:18.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:31:22.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9367" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":883,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:31:22.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 9 21:31:22.797: INFO: Waiting up to 5m0s for pod "pod-79b06575-d856-4460-a362-a244245f97eb" in namespace "emptydir-1275" to be "success or failure" May 9 21:31:22.838: INFO: Pod "pod-79b06575-d856-4460-a362-a244245f97eb": Phase="Pending", Reason="", readiness=false. Elapsed: 41.315287ms May 9 21:31:24.842: INFO: Pod "pod-79b06575-d856-4460-a362-a244245f97eb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04504473s May 9 21:31:26.846: INFO: Pod "pod-79b06575-d856-4460-a362-a244245f97eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049343078s STEP: Saw pod success May 9 21:31:26.846: INFO: Pod "pod-79b06575-d856-4460-a362-a244245f97eb" satisfied condition "success or failure" May 9 21:31:26.848: INFO: Trying to get logs from node jerma-worker pod pod-79b06575-d856-4460-a362-a244245f97eb container test-container: STEP: delete the pod May 9 21:31:26.907: INFO: Waiting for pod pod-79b06575-d856-4460-a362-a244245f97eb to disappear May 9 21:31:27.022: INFO: Pod pod-79b06575-d856-4460-a362-a244245f97eb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:31:27.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1275" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":902,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:31:27.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6502 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6502 STEP: creating replication controller externalsvc in namespace services-6502 I0509 21:31:27.263610 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6502, replica count: 2 I0509 21:31:30.314091 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:31:33.314318 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 9 21:31:33.358: INFO: Creating new exec pod May 9 21:31:37.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6502 execpodggk9t -- /bin/sh -x -c nslookup clusterip-service' May 9 21:31:37.577: INFO: stderr: "I0509 21:31:37.508411 1529 log.go:172] (0xc00056c0b0) (0xc0007b8000) Create stream\nI0509 21:31:37.508483 1529 log.go:172] (0xc00056c0b0) (0xc0007b8000) Stream added, broadcasting: 1\nI0509 21:31:37.510084 1529 log.go:172] (0xc00056c0b0) Reply frame received for 1\nI0509 21:31:37.510110 1529 log.go:172] (0xc00056c0b0) (0xc0007b80a0) Create stream\nI0509 
21:31:37.510116 1529 log.go:172] (0xc00056c0b0) (0xc0007b80a0) Stream added, broadcasting: 3\nI0509 21:31:37.511013 1529 log.go:172] (0xc00056c0b0) Reply frame received for 3\nI0509 21:31:37.511049 1529 log.go:172] (0xc00056c0b0) (0xc00047a8c0) Create stream\nI0509 21:31:37.511061 1529 log.go:172] (0xc00056c0b0) (0xc00047a8c0) Stream added, broadcasting: 5\nI0509 21:31:37.511936 1529 log.go:172] (0xc00056c0b0) Reply frame received for 5\nI0509 21:31:37.562779 1529 log.go:172] (0xc00056c0b0) Data frame received for 5\nI0509 21:31:37.562797 1529 log.go:172] (0xc00047a8c0) (5) Data frame handling\nI0509 21:31:37.562807 1529 log.go:172] (0xc00047a8c0) (5) Data frame sent\n+ nslookup clusterip-service\nI0509 21:31:37.570475 1529 log.go:172] (0xc00056c0b0) Data frame received for 3\nI0509 21:31:37.570507 1529 log.go:172] (0xc0007b80a0) (3) Data frame handling\nI0509 21:31:37.570533 1529 log.go:172] (0xc0007b80a0) (3) Data frame sent\nI0509 21:31:37.571304 1529 log.go:172] (0xc00056c0b0) Data frame received for 3\nI0509 21:31:37.571317 1529 log.go:172] (0xc0007b80a0) (3) Data frame handling\nI0509 21:31:37.571326 1529 log.go:172] (0xc0007b80a0) (3) Data frame sent\nI0509 21:31:37.571886 1529 log.go:172] (0xc00056c0b0) Data frame received for 3\nI0509 21:31:37.571911 1529 log.go:172] (0xc0007b80a0) (3) Data frame handling\nI0509 21:31:37.571935 1529 log.go:172] (0xc00056c0b0) Data frame received for 5\nI0509 21:31:37.571961 1529 log.go:172] (0xc00047a8c0) (5) Data frame handling\nI0509 21:31:37.574047 1529 log.go:172] (0xc00056c0b0) Data frame received for 1\nI0509 21:31:37.574067 1529 log.go:172] (0xc0007b8000) (1) Data frame handling\nI0509 21:31:37.574087 1529 log.go:172] (0xc0007b8000) (1) Data frame sent\nI0509 21:31:37.574113 1529 log.go:172] (0xc00056c0b0) (0xc0007b8000) Stream removed, broadcasting: 1\nI0509 21:31:37.574177 1529 log.go:172] (0xc00056c0b0) Go away received\nI0509 21:31:37.574368 1529 log.go:172] (0xc00056c0b0) (0xc0007b8000) Stream removed, broadcasting: 1\nI0509 21:31:37.574383 1529 log.go:172] (0xc00056c0b0) (0xc0007b80a0) Stream removed, broadcasting: 3\nI0509 21:31:37.574393 1529 log.go:172] (0xc00056c0b0) (0xc00047a8c0) Stream removed, broadcasting: 5\n" May 9 21:31:37.577: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6502.svc.cluster.local\tcanonical name = externalsvc.services-6502.svc.cluster.local.\nName:\texternalsvc.services-6502.svc.cluster.local\nAddress: 10.110.230.255\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6502, will wait for the garbage collector to delete the pods May 9 21:31:37.637: INFO: Deleting ReplicationController externalsvc took: 6.942788ms May 9 21:31:37.938: INFO: Terminating ReplicationController externalsvc pods took: 300.287221ms May 9 21:31:49.556: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:31:49.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6502" for this suite. 
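Changing a Service to type=ExternalName turns its cluster-DNS record into a CNAME, which is exactly what the nslookup output above shows. A sketch of the resulting object, with names reused from this run:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-6502
spec:
  type: ExternalName
  externalName: externalsvc.services-6502.svc.cluster.local   # CNAME target
EOF
# From any pod in the cluster:
#   nslookup clusterip-service.services-6502.svc.cluster.local
#   -> canonical name = externalsvc.services-6502.svc.cluster.local.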
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.583 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":61,"skipped":919,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:31:49.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:31:49.675: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a36d3832-bf16-4651-9806-986089487f0b" in namespace "security-context-test-2996" to be "success or failure" May 9 21:31:49.705: INFO: Pod "alpine-nnp-false-a36d3832-bf16-4651-9806-986089487f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.148633ms May 9 21:31:51.709: INFO: Pod "alpine-nnp-false-a36d3832-bf16-4651-9806-986089487f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034433519s May 9 21:31:53.713: INFO: Pod "alpine-nnp-false-a36d3832-bf16-4651-9806-986089487f0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038063212s May 9 21:31:53.713: INFO: Pod "alpine-nnp-false-a36d3832-bf16-4651-9806-986089487f0b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:31:53.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2996" for this suite. 
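allowPrivilegeEscalation: false makes the runtime set the no_new_privs flag on the container's processes, so setuid binaries and similar cannot gain privileges. A sketch of a pod that makes the flag observable; the grep probe is my own illustration, not the test's check:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]   # expect: NoNewPrivs: 1
    securityContext:
      allowPrivilegeEscalation: false
EOF
kubectl logs nnp-false-demo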
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":933,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:31:53.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 9 21:31:54.055: INFO: Waiting up to 5m0s for pod "client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54" in namespace "containers-6046" to be "success or failure" May 9 21:31:54.096: INFO: Pod "client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54": Phase="Pending", Reason="", readiness=false. Elapsed: 41.005375ms May 9 21:31:56.100: INFO: Pod "client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044825069s May 9 21:31:58.104: INFO: Pod "client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048992935s STEP: Saw pod success May 9 21:31:58.104: INFO: Pod "client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54" satisfied condition "success or failure" May 9 21:31:58.108: INFO: Trying to get logs from node jerma-worker pod client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54 container test-container: STEP: delete the pod May 9 21:31:58.128: INFO: Waiting for pod client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54 to disappear May 9 21:31:58.131: INFO: Pod client-containers-6d24d7a3-1d3f-4c7c-bc6e-31cbd24b6a54 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:31:58.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6046" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":938,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:31:58.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0509 21:32:38.759889 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 21:32:38.759: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:32:38.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2115" for this suite. 
• [SLOW TEST:40.629 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":64,"skipped":947,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:32:38.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:32:38.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 9 21:32:39.026: INFO: stderr: "" May 9 21:32:39.026: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:32:39.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3517" for this suite. 
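kubectl version prints a client stanza and, when it can reach the API server, a server stanza; the test asserts both appear, matching the v1.17.4 client / v1.17.2 server pair above:

kubectl version
# Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", ...}
# Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", ...}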
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":65,"skipped":949,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:32:39.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 9 21:32:43.709: INFO: Successfully updated pod "pod-update-e64b6c76-2c58-49a8-b66d-85b7bc12f04f" STEP: verifying the updated pod is in kubernetes May 9 21:32:43.718: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:32:43.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6012" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:32:43.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 9 21:32:43.795: INFO: namespace kubectl-2090 May 9 21:32:43.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2090' May 9 21:32:44.055: INFO: stderr: "" May 9 21:32:44.055: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 9 21:32:45.126: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:45.126: INFO: Found 0 / 1 May 9 21:32:46.114: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:46.114: INFO: Found 0 / 1 May 9 21:32:47.072: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:47.072: INFO: Found 0 / 1 May 9 21:32:48.174: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:48.174: INFO: Found 0 / 1 May 9 21:32:49.059: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:49.059: INFO: Found 0 / 1 May 9 21:32:50.187: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:50.187: INFO: Found 1 / 1 May 9 21:32:50.187: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 9 21:32:50.190: INFO: Selector matched 1 pods for map[app:agnhost] May 9 21:32:50.190: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 9 21:32:50.190: INFO: wait on agnhost-master startup in kubectl-2090 May 9 21:32:50.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-6qhzx agnhost-master --namespace=kubectl-2090' May 9 21:32:50.310: INFO: stderr: "" May 9 21:32:50.310: INFO: stdout: "Paused\n" STEP: exposing RC May 9 21:32:50.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2090' May 9 21:32:50.561: INFO: stderr: "" May 9 21:32:50.561: INFO: stdout: "service/rm2 exposed\n" May 9 21:32:50.564: INFO: Service rm2 in namespace kubectl-2090 found. STEP: exposing service May 9 21:32:52.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2090' May 9 21:32:52.703: INFO: stderr: "" May 9 21:32:52.703: INFO: stdout: "service/rm3 exposed\n" May 9 21:32:52.711: INFO: Service rm3 in namespace kubectl-2090 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:32:54.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2090" for this suite. 
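kubectl expose builds a Service from the selector of an existing resource, so exposing the rc and then exposing the resulting service both end up routing to the same pods. The two invocations from this run, minus the kubeconfig plumbing, plus one way to confirm they share endpoints:

kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2090
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2090
kubectl get endpoints rm2 rm3 --namespace=kubectl-2090   # both point at the agnhost-master pod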
• [SLOW TEST:10.999 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":67,"skipped":986,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:32:54.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:32:55.760: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:32:57.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656775, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656775, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656775, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656775, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:33:00.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:33:01.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-4246" for this suite. STEP: Destroying namespace "webhook-4246-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.712 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":68,"skipped":1005,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:33:01.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2780 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2780 STEP: creating replication controller externalsvc in namespace services-2780 I0509 21:33:01.572585 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2780, replica count: 2 I0509 21:33:04.623031 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:33:07.623307 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 9 21:33:07.792: INFO: Creating new exec pod May 9 21:33:11.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2780 execpodz5wq5 -- /bin/sh -x -c nslookup nodeport-service' May 9 21:33:12.131: INFO: stderr: "I0509 21:33:12.015795 1650 log.go:172] (0xc0009920b0) (0xc0006e6500) Create stream\nI0509 21:33:12.015854 1650 log.go:172] (0xc0009920b0) (0xc0006e6500) Stream added, broadcasting: 1\nI0509 21:33:12.018698 1650 log.go:172] (0xc0009920b0) Reply frame received for 1\nI0509 21:33:12.018759 1650 log.go:172] (0xc0009920b0) (0xc0006b59a0) Create stream\nI0509 21:33:12.018774 1650 log.go:172] (0xc0009920b0) (0xc0006b59a0) Stream added, broadcasting: 3\nI0509 21:33:12.019762 1650 log.go:172] (0xc0009920b0) Reply frame received for 3\nI0509 21:33:12.019813 1650 log.go:172] (0xc0009920b0) (0xc0006e65a0) Create stream\nI0509 21:33:12.019836 1650 
log.go:172] (0xc0009920b0) (0xc0006e65a0) Stream added, broadcasting: 5\nI0509 21:33:12.020864 1650 log.go:172] (0xc0009920b0) Reply frame received for 5\nI0509 21:33:12.118097 1650 log.go:172] (0xc0009920b0) Data frame received for 5\nI0509 21:33:12.118124 1650 log.go:172] (0xc0006e65a0) (5) Data frame handling\nI0509 21:33:12.118142 1650 log.go:172] (0xc0006e65a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0509 21:33:12.122460 1650 log.go:172] (0xc0009920b0) Data frame received for 3\nI0509 21:33:12.122482 1650 log.go:172] (0xc0006b59a0) (3) Data frame handling\nI0509 21:33:12.122502 1650 log.go:172] (0xc0006b59a0) (3) Data frame sent\nI0509 21:33:12.123274 1650 log.go:172] (0xc0009920b0) Data frame received for 3\nI0509 21:33:12.123301 1650 log.go:172] (0xc0006b59a0) (3) Data frame handling\nI0509 21:33:12.123329 1650 log.go:172] (0xc0006b59a0) (3) Data frame sent\nI0509 21:33:12.123738 1650 log.go:172] (0xc0009920b0) Data frame received for 3\nI0509 21:33:12.123755 1650 log.go:172] (0xc0006b59a0) (3) Data frame handling\nI0509 21:33:12.123838 1650 log.go:172] (0xc0009920b0) Data frame received for 5\nI0509 21:33:12.123854 1650 log.go:172] (0xc0006e65a0) (5) Data frame handling\nI0509 21:33:12.125652 1650 log.go:172] (0xc0009920b0) Data frame received for 1\nI0509 21:33:12.125675 1650 log.go:172] (0xc0006e6500) (1) Data frame handling\nI0509 21:33:12.125696 1650 log.go:172] (0xc0006e6500) (1) Data frame sent\nI0509 21:33:12.125729 1650 log.go:172] (0xc0009920b0) (0xc0006e6500) Stream removed, broadcasting: 1\nI0509 21:33:12.125806 1650 log.go:172] (0xc0009920b0) Go away received\nI0509 21:33:12.126779 1650 log.go:172] (0xc0009920b0) (0xc0006e6500) Stream removed, broadcasting: 1\nI0509 21:33:12.126817 1650 log.go:172] (0xc0009920b0) (0xc0006b59a0) Stream removed, broadcasting: 3\nI0509 21:33:12.126834 1650 log.go:172] (0xc0009920b0) (0xc0006e65a0) Stream removed, broadcasting: 5\n" May 9 21:33:12.131: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2780.svc.cluster.local\tcanonical name = externalsvc.services-2780.svc.cluster.local.\nName:\texternalsvc.services-2780.svc.cluster.local\nAddress: 10.103.203.47\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2780, will wait for the garbage collector to delete the pods May 9 21:33:12.191: INFO: Deleting ReplicationController externalsvc took: 6.402381ms May 9 21:33:12.291: INFO: Terminating ReplicationController externalsvc pods took: 100.22971ms May 9 21:33:19.653: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:33:19.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2780" for this suite. 
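The NodePort variant works the same way as the ClusterIP case earlier: once the type flips to ExternalName, cluster DNS answers with a CNAME instead of a virtual IP, and the exec pod's nslookup is the check. The verification step from this run, minus the kubeconfig plumbing:

kubectl exec execpodz5wq5 --namespace=services-2780 -- nslookup nodeport-service
# -> nodeport-service.services-2780.svc.cluster.local
#    canonical name = externalsvc.services-2780.svc.cluster.local.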
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.255 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":69,"skipped":1013,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:33:19.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:33:20.346: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:33:22.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:33:24.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:33:27.426: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:33:27.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6253" for this suite. STEP: Destroying namespace "webhook-6253-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.060 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":70,"skipped":1023,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:33:27.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 9 21:33:27.829: INFO: Waiting up to 5m0s for pod "pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1" in namespace "emptydir-7029" to be "success or failure" May 9 21:33:27.839: INFO: Pod "pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1": Phase="Pending", Reason="", 
readiness=false. Elapsed: 10.016103ms May 9 21:33:29.843: INFO: Pod "pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014315908s May 9 21:33:31.862: INFO: Pod "pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033431965s STEP: Saw pod success May 9 21:33:31.863: INFO: Pod "pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1" satisfied condition "success or failure" May 9 21:33:31.865: INFO: Trying to get logs from node jerma-worker2 pod pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1 container test-container: STEP: delete the pod May 9 21:33:31.923: INFO: Waiting for pod pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1 to disappear May 9 21:33:31.929: INFO: Pod pod-b10fba69-6a7c-41c6-8749-09951a0ea8b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:33:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7029" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:33:31.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 9 21:33:32.051: INFO: Waiting up to 5m0s for pod "pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed" in namespace "emptydir-8716" to be "success or failure" May 9 21:33:32.061: INFO: Pod "pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.741765ms May 9 21:33:34.065: INFO: Pod "pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013766256s May 9 21:33:36.069: INFO: Pod "pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017462679s STEP: Saw pod success May 9 21:33:36.069: INFO: Pod "pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed" satisfied condition "success or failure" May 9 21:33:36.071: INFO: Trying to get logs from node jerma-worker pod pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed container test-container: STEP: delete the pod May 9 21:33:36.092: INFO: Waiting for pod pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed to disappear May 9 21:33:36.116: INFO: Pod pod-ae8f65bb-9a83-4476-b7fb-f9243e6e94ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:33:36.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8716" for this suite. 
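Both EmptyDir cases above follow the same pattern: a one-shot pod mounts an emptyDir volume, and the test asserts on the mode of the mount point (tmpfs case) or of a file written into it (0666 case). A minimal sketch of the tmpfs variant, with an illustrative name and a stand-in busybox command rather than the framework's actual test image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mode of the mount point, roughly what the test asserts on.
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed; omit for the node-default medium case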
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:33:36.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 9 21:33:36.221: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:36.226: INFO: Number of nodes with available pods: 0 May 9 21:33:36.226: INFO: Node jerma-worker is running more than one daemon pod May 9 21:33:37.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:37.305: INFO: Number of nodes with available pods: 0 May 9 21:33:37.305: INFO: Node jerma-worker is running more than one daemon pod May 9 21:33:38.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:38.232: INFO: Number of nodes with available pods: 0 May 9 21:33:38.232: INFO: Node jerma-worker is running more than one daemon pod May 9 21:33:39.235: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:39.269: INFO: Number of nodes with available pods: 0 May 9 21:33:39.269: INFO: Node jerma-worker is running more than one daemon pod May 9 21:33:40.231: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:40.235: INFO: Number of nodes with available pods: 1 May 9 21:33:40.235: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:41.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:41.239: INFO: Number of nodes with available pods: 2 May 9 21:33:41.239: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 9 21:33:41.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:41.329: INFO: Number of nodes with available pods: 1 May 9 21:33:41.329: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:42.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:42.338: INFO: Number of nodes with available pods: 1 May 9 21:33:42.338: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:43.333: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:43.341: INFO: Number of nodes with available pods: 1 May 9 21:33:43.341: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:44.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:44.338: INFO: Number of nodes with available pods: 1 May 9 21:33:44.338: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:45.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:45.338: INFO: Number of nodes with available pods: 1 May 9 21:33:45.338: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:46.343: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:46.346: INFO: Number of nodes with available pods: 1 May 9 21:33:46.346: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:47.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:47.341: INFO: Number of nodes with available pods: 1 May 9 21:33:47.341: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:48.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:48.338: INFO: Number of nodes with available pods: 1 May 9 21:33:48.338: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:49.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:49.337: INFO: Number of nodes with available pods: 1 May 9 21:33:49.337: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:50.333: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:50.336: INFO: Number of nodes with available pods: 1 May 9 21:33:50.336: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:51.333: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 9 21:33:51.336: INFO: Number of nodes with available pods: 1 May 9 21:33:51.336: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:52.333: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:52.349: INFO: Number of nodes with available pods: 1 May 9 21:33:52.349: INFO: Node jerma-worker2 is running more than one daemon pod May 9 21:33:53.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:33:53.338: INFO: Number of nodes with available pods: 2 May 9 21:33:53.338: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7968, will wait for the garbage collector to delete the pods May 9 21:33:53.402: INFO: Deleting DaemonSet.extensions daemon-set took: 7.718369ms May 9 21:33:53.502: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.198037ms May 9 21:33:59.618: INFO: Number of nodes with available pods: 0 May 9 21:33:59.618: INFO: Number of running nodes: 0, number of available pods: 0 May 9 21:33:59.622: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7968/daemonsets","resourceVersion":"14800432"},"items":null} May 9 21:33:59.636: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7968/pods","resourceVersion":"14800433"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:33:59.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7968" for this suite. 
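The DaemonSet under test is deliberately minimal; it is roughly the following shape (the name matches the log, but the pod spec is an assumption, since the log never prints it). Because the pod template carries no toleration for node-role.kubernetes.io/master, the tainted control-plane node is skipped, which is why the poll messages above only ever count the two workers:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      # No toleration for the master taint, so jerma-control-plane is skipped.
      containers:
      - name: app
        image: httpd:2.4.38-alpine   # assumption; any always-running image works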
• [SLOW TEST:23.524 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":73,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:33:59.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e571605c-1385-4583-a0a3-c9b8dfb76cfb STEP: Creating a pod to test consume configMaps May 9 21:33:59.740: INFO: Waiting up to 5m0s for pod "pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac" in namespace "configmap-1045" to be "success or failure" May 9 21:33:59.756: INFO: Pod "pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac": Phase="Pending", Reason="", readiness=false. Elapsed: 15.810399ms May 9 21:34:01.768: INFO: Pod "pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028036471s May 9 21:34:03.773: INFO: Pod "pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032552738s STEP: Saw pod success May 9 21:34:03.773: INFO: Pod "pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac" satisfied condition "success or failure" May 9 21:34:03.776: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac container configmap-volume-test: STEP: delete the pod May 9 21:34:03.849: INFO: Waiting for pod pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac to disappear May 9 21:34:03.869: INFO: Pod pod-configmaps-97df8734-f60b-44de-87e9-4bade8d422ac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:34:03.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1045" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1155,"failed":0} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:34:03.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-15458951-7830-4fb4-aba7-13224fc17e53 STEP: Creating configMap with name cm-test-opt-upd-5f8fe796-d112-4c3b-aa14-acfac0c2b03f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-15458951-7830-4fb4-aba7-13224fc17e53 STEP: Updating configmap cm-test-opt-upd-5f8fe796-d112-4c3b-aa14-acfac0c2b03f STEP: Creating configMap with name cm-test-opt-create-8dbce635-6c0f-4be4-9ea9-cec7fe3ff269 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:34:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2327" for this suite. • [SLOW TEST:8.281 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1156,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:34:12.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:34:12.591: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:34:14.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724656852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:34:17.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:34:20.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9253" for this suite. STEP: Destroying namespace "webhook-9253-markers" for this suite. 
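The objects being listed and exercised here are MutatingWebhookConfigurations that point at the e2e-test-webhook Service deployed above. A minimal sketch of one such configuration (the service name and namespace come from the log; the configuration name, webhook name, path, and rules are illustrative, and caBundle is elided):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook       # illustrative
webhooks:
- name: mutate-configmaps.example.com   # illustrative
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-9253
      name: e2e-test-webhook
      path: /mutating-configmaps        # illustrative path
    # caBundle: elided; must be the CA that signed the webhook's serving cert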
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.814 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":76,"skipped":1159,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:34:20.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5082 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 9 21:34:21.143: INFO: Found 0 stateful pods, waiting for 3 May 9 21:34:31.148: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:34:31.148: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:34:31.148: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 9 21:34:41.148: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:34:41.148: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:34:41.148: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 9 21:34:41.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5082 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:34:41.412: INFO: stderr: "I0509 21:34:41.302196 1670 log.go:172] (0xc000a24a50) (0xc0009ae0a0) Create stream\nI0509 21:34:41.302250 1670 log.go:172] (0xc000a24a50) (0xc0009ae0a0) Stream added, broadcasting: 1\nI0509 21:34:41.304869 1670 log.go:172] (0xc000a24a50) Reply frame received for 1\nI0509 21:34:41.304920 1670 log.go:172] (0xc000a24a50) (0xc0009ae140) Create stream\nI0509 21:34:41.304943 1670 log.go:172] (0xc000a24a50) (0xc0009ae140) Stream added, broadcasting: 3\nI0509 21:34:41.306527 1670 log.go:172] (0xc000a24a50) Reply frame received for 3\nI0509 21:34:41.306566 1670 log.go:172] (0xc000a24a50) (0xc0009ae280) Create 
stream\nI0509 21:34:41.306576 1670 log.go:172] (0xc000a24a50) (0xc0009ae280) Stream added, broadcasting: 5\nI0509 21:34:41.307649 1670 log.go:172] (0xc000a24a50) Reply frame received for 5\nI0509 21:34:41.379785 1670 log.go:172] (0xc000a24a50) Data frame received for 5\nI0509 21:34:41.379818 1670 log.go:172] (0xc0009ae280) (5) Data frame handling\nI0509 21:34:41.379839 1670 log.go:172] (0xc0009ae280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:34:41.404980 1670 log.go:172] (0xc000a24a50) Data frame received for 3\nI0509 21:34:41.405003 1670 log.go:172] (0xc0009ae140) (3) Data frame handling\nI0509 21:34:41.405012 1670 log.go:172] (0xc0009ae140) (3) Data frame sent\nI0509 21:34:41.405019 1670 log.go:172] (0xc000a24a50) Data frame received for 3\nI0509 21:34:41.405025 1670 log.go:172] (0xc0009ae140) (3) Data frame handling\nI0509 21:34:41.405065 1670 log.go:172] (0xc000a24a50) Data frame received for 5\nI0509 21:34:41.405088 1670 log.go:172] (0xc0009ae280) (5) Data frame handling\nI0509 21:34:41.407329 1670 log.go:172] (0xc000a24a50) Data frame received for 1\nI0509 21:34:41.407355 1670 log.go:172] (0xc0009ae0a0) (1) Data frame handling\nI0509 21:34:41.407369 1670 log.go:172] (0xc0009ae0a0) (1) Data frame sent\nI0509 21:34:41.407382 1670 log.go:172] (0xc000a24a50) (0xc0009ae0a0) Stream removed, broadcasting: 1\nI0509 21:34:41.407697 1670 log.go:172] (0xc000a24a50) (0xc0009ae0a0) Stream removed, broadcasting: 1\nI0509 21:34:41.407716 1670 log.go:172] (0xc000a24a50) (0xc0009ae140) Stream removed, broadcasting: 3\nI0509 21:34:41.407874 1670 log.go:172] (0xc000a24a50) Go away received\nI0509 21:34:41.407953 1670 log.go:172] (0xc000a24a50) (0xc0009ae280) Stream removed, broadcasting: 5\n" May 9 21:34:41.412: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:34:41.412: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 9 21:34:51.444: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 9 21:35:01.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5082 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:35:01.715: INFO: stderr: "I0509 21:35:01.601860 1690 log.go:172] (0xc00090aa50) (0xc0008f2280) Create stream\nI0509 21:35:01.601932 1690 log.go:172] (0xc00090aa50) (0xc0008f2280) Stream added, broadcasting: 1\nI0509 21:35:01.604963 1690 log.go:172] (0xc00090aa50) Reply frame received for 1\nI0509 21:35:01.605026 1690 log.go:172] (0xc00090aa50) (0xc0008f23c0) Create stream\nI0509 21:35:01.605053 1690 log.go:172] (0xc00090aa50) (0xc0008f23c0) Stream added, broadcasting: 3\nI0509 21:35:01.606094 1690 log.go:172] (0xc00090aa50) Reply frame received for 3\nI0509 21:35:01.606141 1690 log.go:172] (0xc00090aa50) (0xc0002ed540) Create stream\nI0509 21:35:01.606154 1690 log.go:172] (0xc00090aa50) (0xc0002ed540) Stream added, broadcasting: 5\nI0509 21:35:01.607107 1690 log.go:172] (0xc00090aa50) Reply frame received for 5\nI0509 21:35:01.709505 1690 log.go:172] (0xc00090aa50) Data frame received for 3\nI0509 21:35:01.709537 1690 log.go:172] (0xc0008f23c0) (3) Data frame handling\nI0509 21:35:01.709554 1690 log.go:172] (0xc0008f23c0) (3) Data frame sent\nI0509 
21:35:01.709648 1690 log.go:172] (0xc00090aa50) Data frame received for 5\nI0509 21:35:01.709691 1690 log.go:172] (0xc00090aa50) Data frame received for 3\nI0509 21:35:01.709736 1690 log.go:172] (0xc0008f23c0) (3) Data frame handling\nI0509 21:35:01.709770 1690 log.go:172] (0xc0002ed540) (5) Data frame handling\nI0509 21:35:01.709786 1690 log.go:172] (0xc0002ed540) (5) Data frame sent\nI0509 21:35:01.709798 1690 log.go:172] (0xc00090aa50) Data frame received for 5\nI0509 21:35:01.709809 1690 log.go:172] (0xc0002ed540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:35:01.711080 1690 log.go:172] (0xc00090aa50) Data frame received for 1\nI0509 21:35:01.711097 1690 log.go:172] (0xc0008f2280) (1) Data frame handling\nI0509 21:35:01.711114 1690 log.go:172] (0xc0008f2280) (1) Data frame sent\nI0509 21:35:01.711128 1690 log.go:172] (0xc00090aa50) (0xc0008f2280) Stream removed, broadcasting: 1\nI0509 21:35:01.711446 1690 log.go:172] (0xc00090aa50) (0xc0008f2280) Stream removed, broadcasting: 1\nI0509 21:35:01.711471 1690 log.go:172] (0xc00090aa50) (0xc0008f23c0) Stream removed, broadcasting: 3\nI0509 21:35:01.711495 1690 log.go:172] (0xc00090aa50) (0xc0002ed540) Stream removed, broadcasting: 5\n" May 9 21:35:01.716: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:35:01.716: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:35:11.737: INFO: Waiting for StatefulSet statefulset-5082/ss2 to complete update May 9 21:35:11.737: INFO: Waiting for Pod statefulset-5082/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 9 21:35:11.737: INFO: Waiting for Pod statefulset-5082/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 9 21:35:21.782: INFO: Waiting for StatefulSet statefulset-5082/ss2 to complete update STEP: Rolling back to a previous revision May 9 21:35:31.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5082 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:35:32.038: INFO: stderr: "I0509 21:35:31.881405 1710 log.go:172] (0xc000ac8000) (0xc000aae000) Create stream\nI0509 21:35:31.881475 1710 log.go:172] (0xc000ac8000) (0xc000aae000) Stream added, broadcasting: 1\nI0509 21:35:31.883980 1710 log.go:172] (0xc000ac8000) Reply frame received for 1\nI0509 21:35:31.884016 1710 log.go:172] (0xc000ac8000) (0xc000ad2000) Create stream\nI0509 21:35:31.884024 1710 log.go:172] (0xc000ac8000) (0xc000ad2000) Stream added, broadcasting: 3\nI0509 21:35:31.884837 1710 log.go:172] (0xc000ac8000) Reply frame received for 3\nI0509 21:35:31.884868 1710 log.go:172] (0xc000ac8000) (0xc0006d5c20) Create stream\nI0509 21:35:31.884878 1710 log.go:172] (0xc000ac8000) (0xc0006d5c20) Stream added, broadcasting: 5\nI0509 21:35:31.885937 1710 log.go:172] (0xc000ac8000) Reply frame received for 5\nI0509 21:35:31.986198 1710 log.go:172] (0xc000ac8000) Data frame received for 5\nI0509 21:35:31.986228 1710 log.go:172] (0xc0006d5c20) (5) Data frame handling\nI0509 21:35:31.986248 1710 log.go:172] (0xc0006d5c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:35:32.031230 1710 log.go:172] (0xc000ac8000) Data frame received for 3\nI0509 21:35:32.031261 1710 log.go:172] (0xc000ad2000) (3) Data frame handling\nI0509 21:35:32.031272 1710 log.go:172] (0xc000ad2000) (3) Data frame sent\nI0509 
21:35:32.031569 1710 log.go:172] (0xc000ac8000) Data frame received for 5\nI0509 21:35:32.031621 1710 log.go:172] (0xc0006d5c20) (5) Data frame handling\nI0509 21:35:32.031653 1710 log.go:172] (0xc000ac8000) Data frame received for 3\nI0509 21:35:32.031671 1710 log.go:172] (0xc000ad2000) (3) Data frame handling\nI0509 21:35:32.033456 1710 log.go:172] (0xc000ac8000) Data frame received for 1\nI0509 21:35:32.033481 1710 log.go:172] (0xc000aae000) (1) Data frame handling\nI0509 21:35:32.033523 1710 log.go:172] (0xc000aae000) (1) Data frame sent\nI0509 21:35:32.033552 1710 log.go:172] (0xc000ac8000) (0xc000aae000) Stream removed, broadcasting: 1\nI0509 21:35:32.033590 1710 log.go:172] (0xc000ac8000) Go away received\nI0509 21:35:32.033992 1710 log.go:172] (0xc000ac8000) (0xc000aae000) Stream removed, broadcasting: 1\nI0509 21:35:32.034011 1710 log.go:172] (0xc000ac8000) (0xc000ad2000) Stream removed, broadcasting: 3\nI0509 21:35:32.034022 1710 log.go:172] (0xc000ac8000) (0xc0006d5c20) Stream removed, broadcasting: 5\n" May 9 21:35:32.038: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:35:32.038: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:35:42.071: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 9 21:35:52.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5082 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:35:54.361: INFO: stderr: "I0509 21:35:54.267861 1731 log.go:172] (0xc00011a370) (0xc00071fcc0) Create stream\nI0509 21:35:54.267899 1731 log.go:172] (0xc00011a370) (0xc00071fcc0) Stream added, broadcasting: 1\nI0509 21:35:54.270981 1731 log.go:172] (0xc00011a370) Reply frame received for 1\nI0509 21:35:54.271043 1731 log.go:172] (0xc00011a370) (0xc000742000) Create stream\nI0509 21:35:54.271054 1731 log.go:172] (0xc00011a370) (0xc000742000) Stream added, broadcasting: 3\nI0509 21:35:54.272067 1731 log.go:172] (0xc00011a370) Reply frame received for 3\nI0509 21:35:54.272108 1731 log.go:172] (0xc00011a370) (0xc000778000) Create stream\nI0509 21:35:54.272124 1731 log.go:172] (0xc00011a370) (0xc000778000) Stream added, broadcasting: 5\nI0509 21:35:54.272949 1731 log.go:172] (0xc00011a370) Reply frame received for 5\nI0509 21:35:54.354246 1731 log.go:172] (0xc00011a370) Data frame received for 3\nI0509 21:35:54.354284 1731 log.go:172] (0xc000742000) (3) Data frame handling\nI0509 21:35:54.354296 1731 log.go:172] (0xc000742000) (3) Data frame sent\nI0509 21:35:54.354305 1731 log.go:172] (0xc00011a370) Data frame received for 3\nI0509 21:35:54.354312 1731 log.go:172] (0xc000742000) (3) Data frame handling\nI0509 21:35:54.354324 1731 log.go:172] (0xc00011a370) Data frame received for 5\nI0509 21:35:54.354331 1731 log.go:172] (0xc000778000) (5) Data frame handling\nI0509 21:35:54.354339 1731 log.go:172] (0xc000778000) (5) Data frame sent\nI0509 21:35:54.354347 1731 log.go:172] (0xc00011a370) Data frame received for 5\nI0509 21:35:54.354353 1731 log.go:172] (0xc000778000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:35:54.355348 1731 log.go:172] (0xc00011a370) Data frame received for 1\nI0509 21:35:54.355361 1731 log.go:172] (0xc00071fcc0) (1) Data frame handling\nI0509 21:35:54.355371 1731 log.go:172] (0xc00071fcc0) (1) Data frame sent\nI0509 21:35:54.355384 1731 log.go:172] 
(0xc00011a370) (0xc00071fcc0) Stream removed, broadcasting: 1\nI0509 21:35:54.355637 1731 log.go:172] (0xc00011a370) Go away received\nI0509 21:35:54.355672 1731 log.go:172] (0xc00011a370) (0xc00071fcc0) Stream removed, broadcasting: 1\nI0509 21:35:54.355693 1731 log.go:172] (0xc00011a370) (0xc000742000) Stream removed, broadcasting: 3\nI0509 21:35:54.355710 1731 log.go:172] (0xc00011a370) (0xc000778000) Stream removed, broadcasting: 5\n" May 9 21:35:54.361: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:35:54.361: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:36:14.382: INFO: Waiting for StatefulSet statefulset-5082/ss2 to complete update May 9 21:36:14.382: INFO: Waiting for Pod statefulset-5082/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 9 21:36:24.391: INFO: Waiting for StatefulSet statefulset-5082/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 9 21:36:34.391: INFO: Deleting all statefulset in ns statefulset-5082 May 9 21:36:34.394: INFO: Scaling statefulset ss2 to 0 May 9 21:36:54.411: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:36:54.415: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:36:54.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5082" for this suite. • [SLOW TEST:153.459 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":77,"skipped":1165,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:36:54.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2d22c07d-c4f1-4fad-8f17-2a0201d8111e STEP: Creating a pod to test consume configMaps May 9 21:36:54.520: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577" in namespace "projected-6564" to be "success or failure" May 9 21:36:54.533: INFO: Pod "pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577": Phase="Pending", Reason="", readiness=false. Elapsed: 13.132816ms May 9 21:36:56.536: INFO: Pod "pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016706026s May 9 21:36:58.541: INFO: Pod "pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021180022s STEP: Saw pod success May 9 21:36:58.541: INFO: Pod "pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577" satisfied condition "success or failure" May 9 21:36:58.544: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577 container projected-configmap-volume-test: STEP: delete the pod May 9 21:36:58.576: INFO: Waiting for pod pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577 to disappear May 9 21:36:58.632: INFO: Pod pod-projected-configmaps-7c587851-bb07-4a0a-98cf-b044e2ef4577 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:36:58.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6564" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1183,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:36:58.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:36:58.737: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 9 21:37:00.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 create -f -' May 9 21:37:04.450: INFO: stderr: "" May 9 21:37:04.450: INFO: stdout: "e2e-test-crd-publish-openapi-5314-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 9 21:37:04.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 delete e2e-test-crd-publish-openapi-5314-crds test-foo' May 9 21:37:04.574: INFO: stderr: "" May 9 21:37:04.574: INFO: stdout: "e2e-test-crd-publish-openapi-5314-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 9 21:37:04.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 apply -f -' May 9 21:37:04.832: INFO: stderr: "" May 9 21:37:04.832: INFO: stdout: 
"e2e-test-crd-publish-openapi-5314-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 9 21:37:04.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 delete e2e-test-crd-publish-openapi-5314-crds test-foo' May 9 21:37:04.926: INFO: stderr: "" May 9 21:37:04.926: INFO: stdout: "e2e-test-crd-publish-openapi-5314-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 9 21:37:04.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 create -f -' May 9 21:37:05.156: INFO: rc: 1 May 9 21:37:05.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 apply -f -' May 9 21:37:05.409: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 9 21:37:05.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 create -f -' May 9 21:37:05.655: INFO: rc: 1 May 9 21:37:05.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1409 apply -f -' May 9 21:37:05.904: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 9 21:37:05.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5314-crds' May 9 21:37:06.159: INFO: stderr: "" May 9 21:37:06.159: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5314-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 9 21:37:06.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5314-crds.metadata' May 9 21:37:06.405: INFO: stderr: "" May 9 21:37:06.405: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5314-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. 
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. 
If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 9 21:37:06.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5314-crds.spec' May 9 21:37:06.643: INFO: stderr: "" May 9 21:37:06.643: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5314-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 9 21:37:06.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5314-crds.spec.bars' May 9 21:37:06.941: INFO: stderr: "" May 9 21:37:06.941: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5314-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 9 21:37:06.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5314-crds.spec.bars2' May 9 21:37:07.166: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:09.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1409" for this suite. 
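The schema driving all of the behavior above (client-side validation plus the kubectl explain output) is an ordinary structural OpenAPI v3 schema on the CRD. A sketch reconstructed from the explain output: field names and descriptions match the log, while the exact type of age and the apiextensions version used by the framework are assumptions:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-5314-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-5314-crds
    kind: E2e-test-crd-publish-openapi-5314-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]     # why "create without required properties" fails above
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string       # assumption
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object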
• [SLOW TEST:10.421 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":79,"skipped":1189,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:09.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5357.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5357.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 21:37:15.230: INFO: DNS probes using dns-5357/dns-test-9d0edd6e-8496-4a40-97fc-a33fa7bf0c38 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:15.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5357" for this suite. 
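Each dig probe above only asserts that the API service's cluster DNS name resolves to at least one A record from inside a pod. A rough in-pod equivalent in Go, assuming the pod's resolver is cluster DNS; unlike the probes, this exercises a single resolver path rather than UDP and TCP separately.

    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        // Resolve the API service's cluster DNS name and require at
        // least one address, mirroring the "test -n $check" pattern.
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil || len(addrs) == 0 {
            fmt.Fprintln(os.Stderr, "cluster DNS lookup failed:", err)
            os.Exit(1)
        }
        fmt.Println("OK", addrs)
    }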
• [SLOW TEST:6.233 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":80,"skipped":1207,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:15.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693 May 9 21:37:15.372: INFO: Pod name my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693: Found 0 pods out of 1 May 9 21:37:20.377: INFO: Pod name my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693: Found 1 pods out of 1 May 9 21:37:20.377: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693" are running May 9 21:37:20.402: INFO: Pod "my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693-pff7h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 21:37:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 21:37:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 21:37:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 21:37:15 +0000 UTC Reason: Message:}]) May 9 21:37:20.402: INFO: Trying to dial the pod May 9 21:37:25.414: INFO: Controller my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693: Got expected result from replica 1 [my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693-pff7h]: "my-hostname-basic-37712caf-4159-45c2-bcf5-7f5064098693-pff7h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:25.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1082" for this suite. 
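The controller under test is a single-replica RC whose pod answers with its own hostname, which is what the "Got expected result from replica 1" line verifies. A sketch of that object with the client-go API types; the image and args are assumptions, since the suite uses its own test images.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        name := "my-hostname-basic-example"
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: int32Ptr(1),
                // RC selectors are plain label maps, matched against the
                // pod template's labels below.
                Selector: map[string]string{"name": name},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  name,
                            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed image
                            Args:  []string{"serve-hostname"},
                        }},
                    },
                },
            },
        }
        // The framework submits this with
        // clientset.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
        // and then dials each replica, as the log above shows.
        fmt.Println(rc.Name)
    }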
• [SLOW TEST:10.129 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":81,"skipped":1221,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:25.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 9 21:37:25.503: INFO: Waiting up to 5m0s for pod "downward-api-6e062147-231c-4213-ba28-eaf51256ee8b" in namespace "downward-api-5110" to be "success or failure" May 9 21:37:25.510: INFO: Pod "downward-api-6e062147-231c-4213-ba28-eaf51256ee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.918997ms May 9 21:37:27.513: INFO: Pod "downward-api-6e062147-231c-4213-ba28-eaf51256ee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010644928s May 9 21:37:29.518: INFO: Pod "downward-api-6e062147-231c-4213-ba28-eaf51256ee8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015266955s STEP: Saw pod success May 9 21:37:29.518: INFO: Pod "downward-api-6e062147-231c-4213-ba28-eaf51256ee8b" satisfied condition "success or failure" May 9 21:37:29.522: INFO: Trying to get logs from node jerma-worker pod downward-api-6e062147-231c-4213-ba28-eaf51256ee8b container dapi-container: STEP: delete the pod May 9 21:37:29.566: INFO: Waiting for pod downward-api-6e062147-231c-4213-ba28-eaf51256ee8b to disappear May 9 21:37:29.569: INFO: Pod downward-api-6e062147-231c-4213-ba28-eaf51256ee8b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:29.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5110" for this suite. 
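What this test wires up is a downward API environment variable backed by the pod's status.hostIP field; the pod runs to completion and its logs are checked for the value. A minimal pod-spec sketch; the env var name and image are assumptions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox", // assumed image
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        // HOST_IP is filled in by the kubelet from the
                        // pod's own status at container start.
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].Env[0].Name)
    }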
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:29.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:37:29.613: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7080 I0509 21:37:29.629732 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7080, replica count: 1 I0509 21:37:30.680179 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:37:31.680431 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:37:32.680626 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 9 21:37:32.892: INFO: Created: latency-svc-jw72d May 9 21:37:32.907: INFO: Got endpoints: latency-svc-jw72d [126.42169ms] May 9 21:37:33.022: INFO: Created: latency-svc-8s6dl May 9 21:37:33.074: INFO: Got endpoints: latency-svc-8s6dl [166.607409ms] May 9 21:37:33.074: INFO: Created: latency-svc-gzd6h May 9 21:37:33.109: INFO: Got endpoints: latency-svc-gzd6h [202.500236ms] May 9 21:37:33.278: INFO: Created: latency-svc-mz86b May 9 21:37:33.319: INFO: Got endpoints: latency-svc-mz86b [412.408218ms] May 9 21:37:33.376: INFO: Created: latency-svc-qblcm May 9 21:37:33.392: INFO: Got endpoints: latency-svc-qblcm [484.875949ms] May 9 21:37:33.416: INFO: Created: latency-svc-dzw57 May 9 21:37:33.428: INFO: Got endpoints: latency-svc-dzw57 [520.639491ms] May 9 21:37:33.449: INFO: Created: latency-svc-wzkcn May 9 21:37:33.525: INFO: Got endpoints: latency-svc-wzkcn [617.7933ms] May 9 21:37:33.538: INFO: Created: latency-svc-qndvb May 9 21:37:33.553: INFO: Got endpoints: latency-svc-qndvb [645.686152ms] May 9 21:37:33.601: INFO: Created: latency-svc-nbfxl May 9 21:37:33.615: INFO: Got endpoints: latency-svc-nbfxl [707.538593ms] May 9 21:37:33.687: INFO: Created: latency-svc-6757n May 9 21:37:33.702: INFO: Got endpoints: latency-svc-6757n [795.052247ms] May 9 21:37:33.743: INFO: Created: latency-svc-rfrqb May 9 21:37:33.759: INFO: Got endpoints: latency-svc-rfrqb [851.538066ms] May 9 21:37:33.784: INFO: Created: latency-svc-cwcpp May 9 21:37:33.824: INFO: Got endpoints: latency-svc-cwcpp [917.037082ms] May 9 21:37:33.838: INFO: Created: latency-svc-wk6db May 9 21:37:33.848: INFO: Got endpoints: latency-svc-wk6db [941.123784ms] May 9 21:37:33.869: INFO: Created: latency-svc-k76cl May 9 21:37:33.879: INFO: Got endpoints: latency-svc-k76cl [971.403969ms] May 9 21:37:33.901: INFO: Created: 
latency-svc-cz786 May 9 21:37:33.915: INFO: Got endpoints: latency-svc-cz786 [1.007381114s] May 9 21:37:33.968: INFO: Created: latency-svc-kqpgq May 9 21:37:33.999: INFO: Got endpoints: latency-svc-kqpgq [1.09210655s] May 9 21:37:34.027: INFO: Created: latency-svc-k5zhq May 9 21:37:34.054: INFO: Got endpoints: latency-svc-k5zhq [979.657718ms] May 9 21:37:34.094: INFO: Created: latency-svc-ps557 May 9 21:37:34.139: INFO: Created: latency-svc-qtslf May 9 21:37:34.139: INFO: Got endpoints: latency-svc-ps557 [1.02965259s] May 9 21:37:34.144: INFO: Got endpoints: latency-svc-qtslf [824.886171ms] May 9 21:37:34.171: INFO: Created: latency-svc-2fpl4 May 9 21:37:34.180: INFO: Got endpoints: latency-svc-2fpl4 [787.937086ms] May 9 21:37:34.227: INFO: Created: latency-svc-5vqql May 9 21:37:34.235: INFO: Got endpoints: latency-svc-5vqql [806.995559ms] May 9 21:37:34.256: INFO: Created: latency-svc-dlqsb May 9 21:37:34.277: INFO: Got endpoints: latency-svc-dlqsb [752.026329ms] May 9 21:37:34.306: INFO: Created: latency-svc-vcn7c May 9 21:37:34.357: INFO: Got endpoints: latency-svc-vcn7c [804.565481ms] May 9 21:37:34.366: INFO: Created: latency-svc-xcn9p May 9 21:37:34.381: INFO: Got endpoints: latency-svc-xcn9p [766.382298ms] May 9 21:37:34.408: INFO: Created: latency-svc-qmhwr May 9 21:37:34.423: INFO: Got endpoints: latency-svc-qmhwr [720.707005ms] May 9 21:37:34.454: INFO: Created: latency-svc-qkvjb May 9 21:37:34.495: INFO: Got endpoints: latency-svc-qkvjb [736.043598ms] May 9 21:37:34.507: INFO: Created: latency-svc-gq4tv May 9 21:37:34.520: INFO: Got endpoints: latency-svc-gq4tv [695.57085ms] May 9 21:37:34.538: INFO: Created: latency-svc-qb2qd May 9 21:37:34.551: INFO: Got endpoints: latency-svc-qb2qd [702.300805ms] May 9 21:37:34.570: INFO: Created: latency-svc-tklww May 9 21:37:34.587: INFO: Got endpoints: latency-svc-tklww [708.149896ms] May 9 21:37:34.626: INFO: Created: latency-svc-wcpjc May 9 21:37:34.644: INFO: Got endpoints: latency-svc-wcpjc [729.165205ms] May 9 21:37:34.669: INFO: Created: latency-svc-28pfl May 9 21:37:34.683: INFO: Got endpoints: latency-svc-28pfl [683.376943ms] May 9 21:37:34.706: INFO: Created: latency-svc-rwmbn May 9 21:37:34.720: INFO: Got endpoints: latency-svc-rwmbn [665.954037ms] May 9 21:37:34.777: INFO: Created: latency-svc-zdtk8 May 9 21:37:34.780: INFO: Got endpoints: latency-svc-zdtk8 [641.068086ms] May 9 21:37:34.822: INFO: Created: latency-svc-t5sr6 May 9 21:37:34.840: INFO: Got endpoints: latency-svc-t5sr6 [695.259275ms] May 9 21:37:34.933: INFO: Created: latency-svc-q8vht May 9 21:37:34.954: INFO: Got endpoints: latency-svc-q8vht [774.110012ms] May 9 21:37:34.975: INFO: Created: latency-svc-mtj57 May 9 21:37:34.990: INFO: Got endpoints: latency-svc-mtj57 [755.710534ms] May 9 21:37:35.019: INFO: Created: latency-svc-ksmvr May 9 21:37:35.069: INFO: Got endpoints: latency-svc-ksmvr [792.398736ms] May 9 21:37:35.110: INFO: Created: latency-svc-wr8gw May 9 21:37:35.161: INFO: Got endpoints: latency-svc-wr8gw [803.776642ms] May 9 21:37:35.215: INFO: Created: latency-svc-f2xw5 May 9 21:37:35.226: INFO: Got endpoints: latency-svc-f2xw5 [845.148906ms] May 9 21:37:35.257: INFO: Created: latency-svc-8lznc May 9 21:37:35.267: INFO: Got endpoints: latency-svc-8lznc [843.860564ms] May 9 21:37:35.290: INFO: Created: latency-svc-6lmfv May 9 21:37:35.333: INFO: Got endpoints: latency-svc-6lmfv [838.116628ms] May 9 21:37:35.350: INFO: Created: latency-svc-jcbkw May 9 21:37:35.366: INFO: Got endpoints: latency-svc-jcbkw [846.664629ms] May 9 21:37:35.413: INFO: Created: 
latency-svc-w5txq May 9 21:37:35.425: INFO: Got endpoints: latency-svc-w5txq [873.940368ms] May 9 21:37:35.471: INFO: Created: latency-svc-8hc6p May 9 21:37:35.488: INFO: Got endpoints: latency-svc-8hc6p [901.027544ms] May 9 21:37:35.518: INFO: Created: latency-svc-872gj May 9 21:37:35.534: INFO: Got endpoints: latency-svc-872gj [890.179056ms] May 9 21:37:35.557: INFO: Created: latency-svc-kz6ss May 9 21:37:35.569: INFO: Got endpoints: latency-svc-kz6ss [886.556252ms] May 9 21:37:35.635: INFO: Created: latency-svc-gf75b May 9 21:37:35.648: INFO: Got endpoints: latency-svc-gf75b [928.314216ms] May 9 21:37:35.692: INFO: Created: latency-svc-vv4n8 May 9 21:37:35.750: INFO: Got endpoints: latency-svc-vv4n8 [969.553621ms] May 9 21:37:35.833: INFO: Created: latency-svc-j2zpb May 9 21:37:35.897: INFO: Got endpoints: latency-svc-j2zpb [1.057200177s] May 9 21:37:35.911: INFO: Created: latency-svc-77sx6 May 9 21:37:35.938: INFO: Got endpoints: latency-svc-77sx6 [983.72091ms] May 9 21:37:36.036: INFO: Created: latency-svc-lw7ht May 9 21:37:36.039: INFO: Got endpoints: latency-svc-lw7ht [1.048866417s] May 9 21:37:36.064: INFO: Created: latency-svc-qp95m May 9 21:37:36.073: INFO: Got endpoints: latency-svc-qp95m [1.003716596s] May 9 21:37:36.091: INFO: Created: latency-svc-xsxgf May 9 21:37:36.103: INFO: Got endpoints: latency-svc-xsxgf [941.849555ms] May 9 21:37:36.178: INFO: Created: latency-svc-jdq99 May 9 21:37:36.182: INFO: Got endpoints: latency-svc-jdq99 [955.416287ms] May 9 21:37:36.222: INFO: Created: latency-svc-lxf84 May 9 21:37:36.236: INFO: Got endpoints: latency-svc-lxf84 [968.96851ms] May 9 21:37:36.255: INFO: Created: latency-svc-zm9dk May 9 21:37:36.267: INFO: Got endpoints: latency-svc-zm9dk [934.265871ms] May 9 21:37:36.328: INFO: Created: latency-svc-qkfls May 9 21:37:36.333: INFO: Got endpoints: latency-svc-qkfls [966.479596ms] May 9 21:37:36.397: INFO: Created: latency-svc-8tt78 May 9 21:37:36.421: INFO: Got endpoints: latency-svc-8tt78 [995.922788ms] May 9 21:37:36.505: INFO: Created: latency-svc-wkghf May 9 21:37:36.519: INFO: Got endpoints: latency-svc-wkghf [1.031209515s] May 9 21:37:36.567: INFO: Created: latency-svc-n56z2 May 9 21:37:36.579: INFO: Got endpoints: latency-svc-n56z2 [1.044858751s] May 9 21:37:36.639: INFO: Created: latency-svc-65qtd May 9 21:37:36.651: INFO: Got endpoints: latency-svc-65qtd [1.081890026s] May 9 21:37:36.690: INFO: Created: latency-svc-tfwrq May 9 21:37:36.706: INFO: Got endpoints: latency-svc-tfwrq [1.057815725s] May 9 21:37:36.758: INFO: Created: latency-svc-zh7kt May 9 21:37:36.763: INFO: Got endpoints: latency-svc-zh7kt [1.01260357s] May 9 21:37:36.789: INFO: Created: latency-svc-w2xqv May 9 21:37:36.803: INFO: Got endpoints: latency-svc-w2xqv [906.029672ms] May 9 21:37:36.837: INFO: Created: latency-svc-zn66k May 9 21:37:36.926: INFO: Got endpoints: latency-svc-zn66k [987.84227ms] May 9 21:37:36.948: INFO: Created: latency-svc-9cl78 May 9 21:37:36.965: INFO: Got endpoints: latency-svc-9cl78 [926.021921ms] May 9 21:37:37.005: INFO: Created: latency-svc-86m24 May 9 21:37:37.019: INFO: Got endpoints: latency-svc-86m24 [945.562435ms] May 9 21:37:37.112: INFO: Created: latency-svc-kjg7b May 9 21:37:37.154: INFO: Got endpoints: latency-svc-kjg7b [1.051114667s] May 9 21:37:37.182: INFO: Created: latency-svc-jjbtz May 9 21:37:37.200: INFO: Got endpoints: latency-svc-jjbtz [1.018010216s] May 9 21:37:37.287: INFO: Created: latency-svc-292sq May 9 21:37:37.290: INFO: Got endpoints: latency-svc-292sq [1.054100447s] May 9 21:37:37.335: INFO: Created: 
latency-svc-2p6pz May 9 21:37:37.368: INFO: Got endpoints: latency-svc-2p6pz [1.10094266s] May 9 21:37:37.416: INFO: Created: latency-svc-fsb4x May 9 21:37:37.428: INFO: Got endpoints: latency-svc-fsb4x [1.095313097s] May 9 21:37:37.452: INFO: Created: latency-svc-xqb7n May 9 21:37:37.464: INFO: Got endpoints: latency-svc-xqb7n [1.043766024s] May 9 21:37:37.485: INFO: Created: latency-svc-dbc6c May 9 21:37:37.549: INFO: Got endpoints: latency-svc-dbc6c [1.030022343s] May 9 21:37:37.569: INFO: Created: latency-svc-qpzvq May 9 21:37:37.581: INFO: Got endpoints: latency-svc-qpzvq [1.002240952s] May 9 21:37:37.614: INFO: Created: latency-svc-f2wsh May 9 21:37:37.698: INFO: Got endpoints: latency-svc-f2wsh [1.047013437s] May 9 21:37:37.716: INFO: Created: latency-svc-cmz9l May 9 21:37:37.726: INFO: Got endpoints: latency-svc-cmz9l [1.019913363s] May 9 21:37:37.749: INFO: Created: latency-svc-8kscz May 9 21:37:37.762: INFO: Got endpoints: latency-svc-8kscz [999.488969ms] May 9 21:37:37.779: INFO: Created: latency-svc-665fq May 9 21:37:37.836: INFO: Got endpoints: latency-svc-665fq [1.032794488s] May 9 21:37:37.860: INFO: Created: latency-svc-4r472 May 9 21:37:37.879: INFO: Got endpoints: latency-svc-4r472 [953.284408ms] May 9 21:37:37.902: INFO: Created: latency-svc-4r9k6 May 9 21:37:37.913: INFO: Got endpoints: latency-svc-4r9k6 [947.403632ms] May 9 21:37:37.992: INFO: Created: latency-svc-zjjlw May 9 21:37:37.995: INFO: Got endpoints: latency-svc-zjjlw [976.509513ms] May 9 21:37:38.019: INFO: Created: latency-svc-ptfxk May 9 21:37:38.034: INFO: Got endpoints: latency-svc-ptfxk [879.221967ms] May 9 21:37:38.055: INFO: Created: latency-svc-ctlvr May 9 21:37:38.070: INFO: Got endpoints: latency-svc-ctlvr [870.296101ms] May 9 21:37:38.092: INFO: Created: latency-svc-vpkdm May 9 21:37:38.135: INFO: Got endpoints: latency-svc-vpkdm [845.361026ms] May 9 21:37:38.138: INFO: Created: latency-svc-xwm54 May 9 21:37:38.155: INFO: Got endpoints: latency-svc-xwm54 [786.40809ms] May 9 21:37:38.172: INFO: Created: latency-svc-cqnqp May 9 21:37:38.208: INFO: Got endpoints: latency-svc-cqnqp [779.638462ms] May 9 21:37:38.291: INFO: Created: latency-svc-th7nn May 9 21:37:38.305: INFO: Got endpoints: latency-svc-th7nn [840.980112ms] May 9 21:37:38.324: INFO: Created: latency-svc-bdclt May 9 21:37:38.335: INFO: Got endpoints: latency-svc-bdclt [785.954166ms] May 9 21:37:38.360: INFO: Created: latency-svc-g4p2j May 9 21:37:38.384: INFO: Got endpoints: latency-svc-g4p2j [802.787324ms] May 9 21:37:38.441: INFO: Created: latency-svc-qtqtm May 9 21:37:38.456: INFO: Got endpoints: latency-svc-qtqtm [757.815766ms] May 9 21:37:38.477: INFO: Created: latency-svc-92vqz May 9 21:37:38.486: INFO: Got endpoints: latency-svc-92vqz [760.301421ms] May 9 21:37:38.508: INFO: Created: latency-svc-gvnxx May 9 21:37:38.590: INFO: Got endpoints: latency-svc-gvnxx [827.9814ms] May 9 21:37:38.592: INFO: Created: latency-svc-bf8wf May 9 21:37:38.601: INFO: Got endpoints: latency-svc-bf8wf [765.244445ms] May 9 21:37:38.619: INFO: Created: latency-svc-gw7kd May 9 21:37:38.642: INFO: Got endpoints: latency-svc-gw7kd [763.078643ms] May 9 21:37:38.684: INFO: Created: latency-svc-l6768 May 9 21:37:38.716: INFO: Got endpoints: latency-svc-l6768 [803.4646ms] May 9 21:37:38.735: INFO: Created: latency-svc-ltlg8 May 9 21:37:38.751: INFO: Got endpoints: latency-svc-ltlg8 [756.078147ms] May 9 21:37:38.771: INFO: Created: latency-svc-cq6zm May 9 21:37:38.782: INFO: Got endpoints: latency-svc-cq6zm [748.531049ms] May 9 21:37:38.814: INFO: Created: 
latency-svc-s4cms May 9 21:37:38.854: INFO: Got endpoints: latency-svc-s4cms [784.263691ms] May 9 21:37:38.895: INFO: Created: latency-svc-jc57z May 9 21:37:38.930: INFO: Got endpoints: latency-svc-jc57z [794.448776ms] May 9 21:37:39.028: INFO: Created: latency-svc-7tjld May 9 21:37:39.071: INFO: Got endpoints: latency-svc-7tjld [916.88783ms] May 9 21:37:39.073: INFO: Created: latency-svc-xx6ls May 9 21:37:39.093: INFO: Got endpoints: latency-svc-xx6ls [885.054952ms] May 9 21:37:39.165: INFO: Created: latency-svc-v4h2t May 9 21:37:39.168: INFO: Got endpoints: latency-svc-v4h2t [862.705862ms] May 9 21:37:39.236: INFO: Created: latency-svc-dtrhr May 9 21:37:39.249: INFO: Got endpoints: latency-svc-dtrhr [914.11038ms] May 9 21:37:39.297: INFO: Created: latency-svc-jbh7s May 9 21:37:39.301: INFO: Got endpoints: latency-svc-jbh7s [916.481871ms] May 9 21:37:39.351: INFO: Created: latency-svc-q9dhx May 9 21:37:39.370: INFO: Got endpoints: latency-svc-q9dhx [913.571115ms] May 9 21:37:39.389: INFO: Created: latency-svc-w2j4p May 9 21:37:39.434: INFO: Got endpoints: latency-svc-w2j4p [948.396764ms] May 9 21:37:39.450: INFO: Created: latency-svc-t287q May 9 21:37:39.461: INFO: Got endpoints: latency-svc-t287q [871.078166ms] May 9 21:37:39.482: INFO: Created: latency-svc-rng84 May 9 21:37:39.497: INFO: Got endpoints: latency-svc-rng84 [896.04889ms] May 9 21:37:39.518: INFO: Created: latency-svc-z4m6w May 9 21:37:39.527: INFO: Got endpoints: latency-svc-z4m6w [884.487774ms] May 9 21:37:39.585: INFO: Created: latency-svc-prpwg May 9 21:37:39.605: INFO: Got endpoints: latency-svc-prpwg [888.478493ms] May 9 21:37:39.635: INFO: Created: latency-svc-wc4v5 May 9 21:37:39.659: INFO: Got endpoints: latency-svc-wc4v5 [907.597582ms] May 9 21:37:39.728: INFO: Created: latency-svc-65r5j May 9 21:37:39.739: INFO: Got endpoints: latency-svc-65r5j [956.659034ms] May 9 21:37:39.770: INFO: Created: latency-svc-jtzc7 May 9 21:37:39.782: INFO: Got endpoints: latency-svc-jtzc7 [928.105009ms] May 9 21:37:39.800: INFO: Created: latency-svc-qt5bc May 9 21:37:39.825: INFO: Got endpoints: latency-svc-qt5bc [895.320457ms] May 9 21:37:39.875: INFO: Created: latency-svc-wbhrl May 9 21:37:39.892: INFO: Got endpoints: latency-svc-wbhrl [820.806087ms] May 9 21:37:39.917: INFO: Created: latency-svc-6kxjm May 9 21:37:39.928: INFO: Got endpoints: latency-svc-6kxjm [834.552911ms] May 9 21:37:40.016: INFO: Created: latency-svc-mnr4m May 9 21:37:40.019: INFO: Got endpoints: latency-svc-mnr4m [850.867156ms] May 9 21:37:40.105: INFO: Created: latency-svc-drsdt May 9 21:37:40.141: INFO: Got endpoints: latency-svc-drsdt [892.026447ms] May 9 21:37:40.156: INFO: Created: latency-svc-68d7j May 9 21:37:40.169: INFO: Got endpoints: latency-svc-68d7j [867.720551ms] May 9 21:37:40.193: INFO: Created: latency-svc-lbft6 May 9 21:37:40.211: INFO: Got endpoints: latency-svc-lbft6 [840.820614ms] May 9 21:37:40.241: INFO: Created: latency-svc-6mdf8 May 9 21:37:40.279: INFO: Got endpoints: latency-svc-6mdf8 [844.621685ms] May 9 21:37:40.309: INFO: Created: latency-svc-md475 May 9 21:37:40.325: INFO: Got endpoints: latency-svc-md475 [864.038635ms] May 9 21:37:40.345: INFO: Created: latency-svc-lkqlh May 9 21:37:40.362: INFO: Got endpoints: latency-svc-lkqlh [864.525294ms] May 9 21:37:40.424: INFO: Created: latency-svc-jmwkj May 9 21:37:40.440: INFO: Got endpoints: latency-svc-jmwkj [913.270138ms] May 9 21:37:40.456: INFO: Created: latency-svc-srmlz May 9 21:37:40.470: INFO: Got endpoints: latency-svc-srmlz [865.610524ms] May 9 21:37:40.492: INFO: Created: 
latency-svc-kxh9x May 9 21:37:40.555: INFO: Got endpoints: latency-svc-kxh9x [895.486215ms] May 9 21:37:40.573: INFO: Created: latency-svc-wr64p May 9 21:37:40.598: INFO: Got endpoints: latency-svc-wr64p [858.602923ms] May 9 21:37:40.627: INFO: Created: latency-svc-4kqc8 May 9 21:37:40.651: INFO: Got endpoints: latency-svc-4kqc8 [868.539311ms] May 9 21:37:40.704: INFO: Created: latency-svc-f7znw May 9 21:37:40.710: INFO: Got endpoints: latency-svc-f7znw [884.690887ms] May 9 21:37:40.733: INFO: Created: latency-svc-d8wh8 May 9 21:37:40.742: INFO: Got endpoints: latency-svc-d8wh8 [849.244126ms] May 9 21:37:40.763: INFO: Created: latency-svc-lqmr4 May 9 21:37:40.798: INFO: Got endpoints: latency-svc-lqmr4 [870.080315ms] May 9 21:37:40.878: INFO: Created: latency-svc-mxbtj May 9 21:37:40.888: INFO: Got endpoints: latency-svc-mxbtj [869.306943ms] May 9 21:37:40.909: INFO: Created: latency-svc-ng772 May 9 21:37:40.923: INFO: Got endpoints: latency-svc-ng772 [781.165262ms] May 9 21:37:40.948: INFO: Created: latency-svc-ftkxl May 9 21:37:40.959: INFO: Got endpoints: latency-svc-ftkxl [790.484108ms] May 9 21:37:41.046: INFO: Created: latency-svc-mpqnl May 9 21:37:41.049: INFO: Got endpoints: latency-svc-mpqnl [837.996395ms] May 9 21:37:41.077: INFO: Created: latency-svc-cgxsl May 9 21:37:41.092: INFO: Got endpoints: latency-svc-cgxsl [812.682493ms] May 9 21:37:41.125: INFO: Created: latency-svc-fnddx May 9 21:37:41.189: INFO: Got endpoints: latency-svc-fnddx [864.081715ms] May 9 21:37:41.200: INFO: Created: latency-svc-fdzpv May 9 21:37:41.212: INFO: Got endpoints: latency-svc-fdzpv [850.637532ms] May 9 21:37:41.266: INFO: Created: latency-svc-fq97w May 9 21:37:41.315: INFO: Got endpoints: latency-svc-fq97w [875.366308ms] May 9 21:37:41.328: INFO: Created: latency-svc-qhb78 May 9 21:37:41.345: INFO: Got endpoints: latency-svc-qhb78 [874.280041ms] May 9 21:37:41.365: INFO: Created: latency-svc-8bm4r May 9 21:37:41.381: INFO: Got endpoints: latency-svc-8bm4r [826.809723ms] May 9 21:37:41.400: INFO: Created: latency-svc-74vb5 May 9 21:37:41.441: INFO: Got endpoints: latency-svc-74vb5 [842.937305ms] May 9 21:37:41.461: INFO: Created: latency-svc-c7rzl May 9 21:37:41.478: INFO: Got endpoints: latency-svc-c7rzl [826.627273ms] May 9 21:37:41.500: INFO: Created: latency-svc-q65tq May 9 21:37:41.516: INFO: Got endpoints: latency-svc-q65tq [806.259241ms] May 9 21:37:41.536: INFO: Created: latency-svc-btjdl May 9 21:37:41.578: INFO: Got endpoints: latency-svc-btjdl [836.710693ms] May 9 21:37:41.590: INFO: Created: latency-svc-d8nfn May 9 21:37:41.599: INFO: Got endpoints: latency-svc-d8nfn [801.131707ms] May 9 21:37:41.628: INFO: Created: latency-svc-j5287 May 9 21:37:41.641: INFO: Got endpoints: latency-svc-j5287 [752.598912ms] May 9 21:37:41.670: INFO: Created: latency-svc-4947j May 9 21:37:41.711: INFO: Got endpoints: latency-svc-4947j [788.309862ms] May 9 21:37:41.718: INFO: Created: latency-svc-gmfkv May 9 21:37:41.732: INFO: Got endpoints: latency-svc-gmfkv [772.575001ms] May 9 21:37:41.751: INFO: Created: latency-svc-qh4w6 May 9 21:37:41.774: INFO: Got endpoints: latency-svc-qh4w6 [725.606983ms] May 9 21:37:41.794: INFO: Created: latency-svc-qb29k May 9 21:37:41.806: INFO: Got endpoints: latency-svc-qb29k [713.610763ms] May 9 21:37:41.854: INFO: Created: latency-svc-6znzt May 9 21:37:41.871: INFO: Got endpoints: latency-svc-6znzt [681.212191ms] May 9 21:37:41.904: INFO: Created: latency-svc-ncfm2 May 9 21:37:41.919: INFO: Got endpoints: latency-svc-ncfm2 [706.347207ms] May 9 21:37:41.941: INFO: 
Created: latency-svc-rjv4f May 9 21:37:42.004: INFO: Got endpoints: latency-svc-rjv4f [688.597666ms] May 9 21:37:42.008: INFO: Created: latency-svc-ln5h7 May 9 21:37:42.046: INFO: Got endpoints: latency-svc-ln5h7 [701.245119ms] May 9 21:37:42.088: INFO: Created: latency-svc-kwrlh May 9 21:37:42.153: INFO: Got endpoints: latency-svc-kwrlh [771.973581ms] May 9 21:37:42.168: INFO: Created: latency-svc-wv45x May 9 21:37:42.186: INFO: Got endpoints: latency-svc-wv45x [744.989873ms] May 9 21:37:42.204: INFO: Created: latency-svc-lpd8h May 9 21:37:42.221: INFO: Got endpoints: latency-svc-lpd8h [742.728951ms] May 9 21:37:42.242: INFO: Created: latency-svc-6v2vq May 9 21:37:42.297: INFO: Got endpoints: latency-svc-6v2vq [780.674721ms] May 9 21:37:42.310: INFO: Created: latency-svc-p26gt May 9 21:37:42.323: INFO: Got endpoints: latency-svc-p26gt [744.353616ms] May 9 21:37:42.345: INFO: Created: latency-svc-qsl8n May 9 21:37:42.359: INFO: Got endpoints: latency-svc-qsl8n [760.464243ms] May 9 21:37:42.382: INFO: Created: latency-svc-7tkr9 May 9 21:37:42.396: INFO: Got endpoints: latency-svc-7tkr9 [754.535042ms] May 9 21:37:42.447: INFO: Created: latency-svc-6m2wq May 9 21:37:42.450: INFO: Got endpoints: latency-svc-6m2wq [739.200635ms] May 9 21:37:42.486: INFO: Created: latency-svc-npjnl May 9 21:37:42.511: INFO: Got endpoints: latency-svc-npjnl [779.384781ms] May 9 21:37:42.541: INFO: Created: latency-svc-8nwrv May 9 21:37:42.591: INFO: Got endpoints: latency-svc-8nwrv [816.790694ms] May 9 21:37:42.603: INFO: Created: latency-svc-9c6vh May 9 21:37:42.619: INFO: Got endpoints: latency-svc-9c6vh [813.820351ms] May 9 21:37:42.652: INFO: Created: latency-svc-4gz29 May 9 21:37:42.680: INFO: Got endpoints: latency-svc-4gz29 [809.451518ms] May 9 21:37:42.722: INFO: Created: latency-svc-d5f5k May 9 21:37:42.725: INFO: Got endpoints: latency-svc-d5f5k [806.323507ms] May 9 21:37:42.750: INFO: Created: latency-svc-gsvnn May 9 21:37:42.764: INFO: Got endpoints: latency-svc-gsvnn [760.015845ms] May 9 21:37:42.786: INFO: Created: latency-svc-zj7k5 May 9 21:37:42.801: INFO: Got endpoints: latency-svc-zj7k5 [754.833397ms] May 9 21:37:42.822: INFO: Created: latency-svc-z66j8 May 9 21:37:42.866: INFO: Got endpoints: latency-svc-z66j8 [712.9096ms] May 9 21:37:42.879: INFO: Created: latency-svc-f4nz7 May 9 21:37:42.903: INFO: Got endpoints: latency-svc-f4nz7 [717.589001ms] May 9 21:37:42.946: INFO: Created: latency-svc-j5dp7 May 9 21:37:42.964: INFO: Got endpoints: latency-svc-j5dp7 [743.21814ms] May 9 21:37:43.014: INFO: Created: latency-svc-vg7fh May 9 21:37:43.030: INFO: Got endpoints: latency-svc-vg7fh [732.834328ms] May 9 21:37:43.056: INFO: Created: latency-svc-r8xww May 9 21:37:43.072: INFO: Got endpoints: latency-svc-r8xww [749.022011ms] May 9 21:37:43.166: INFO: Created: latency-svc-8lcvz May 9 21:37:43.203: INFO: Got endpoints: latency-svc-8lcvz [843.971587ms] May 9 21:37:43.205: INFO: Created: latency-svc-sdfdc May 9 21:37:43.217: INFO: Got endpoints: latency-svc-sdfdc [820.918459ms] May 9 21:37:43.252: INFO: Created: latency-svc-mr788 May 9 21:37:43.322: INFO: Got endpoints: latency-svc-mr788 [871.243827ms] May 9 21:37:43.332: INFO: Created: latency-svc-6922x May 9 21:37:43.344: INFO: Got endpoints: latency-svc-6922x [832.351378ms] May 9 21:37:43.362: INFO: Created: latency-svc-w5slv May 9 21:37:43.374: INFO: Got endpoints: latency-svc-w5slv [782.233366ms] May 9 21:37:43.410: INFO: Created: latency-svc-fklw9 May 9 21:37:43.453: INFO: Got endpoints: latency-svc-fklw9 [833.576285ms] May 9 21:37:43.467: INFO: 
Created: latency-svc-l9xm9 May 9 21:37:43.503: INFO: Got endpoints: latency-svc-l9xm9 [822.795956ms] May 9 21:37:43.545: INFO: Created: latency-svc-gcp6m May 9 21:37:43.614: INFO: Got endpoints: latency-svc-gcp6m [888.285197ms] May 9 21:37:43.668: INFO: Created: latency-svc-z7zmc May 9 21:37:43.734: INFO: Got endpoints: latency-svc-z7zmc [970.075524ms] May 9 21:37:43.749: INFO: Created: latency-svc-gqljk May 9 21:37:43.765: INFO: Got endpoints: latency-svc-gqljk [963.794573ms] May 9 21:37:43.797: INFO: Created: latency-svc-qsr5c May 9 21:37:43.807: INFO: Got endpoints: latency-svc-qsr5c [940.869304ms] May 9 21:37:43.827: INFO: Created: latency-svc-wjhkb May 9 21:37:43.832: INFO: Got endpoints: latency-svc-wjhkb [928.620223ms] May 9 21:37:43.884: INFO: Created: latency-svc-nhszs May 9 21:37:43.907: INFO: Got endpoints: latency-svc-nhszs [943.482068ms] May 9 21:37:43.908: INFO: Created: latency-svc-9889d May 9 21:37:43.922: INFO: Got endpoints: latency-svc-9889d [892.408933ms] May 9 21:37:43.944: INFO: Created: latency-svc-w4j9s May 9 21:37:43.958: INFO: Got endpoints: latency-svc-w4j9s [886.445431ms] May 9 21:37:44.028: INFO: Created: latency-svc-z7gs7 May 9 21:37:44.061: INFO: Got endpoints: latency-svc-z7gs7 [857.450793ms] May 9 21:37:44.061: INFO: Created: latency-svc-kwl52 May 9 21:37:44.073: INFO: Got endpoints: latency-svc-kwl52 [856.231886ms] May 9 21:37:44.099: INFO: Created: latency-svc-nr298 May 9 21:37:44.116: INFO: Got endpoints: latency-svc-nr298 [794.849293ms] May 9 21:37:44.178: INFO: Created: latency-svc-jxknt May 9 21:37:44.181: INFO: Got endpoints: latency-svc-jxknt [837.675272ms] May 9 21:37:44.207: INFO: Created: latency-svc-6c65r May 9 21:37:44.226: INFO: Got endpoints: latency-svc-6c65r [852.021163ms] May 9 21:37:44.240: INFO: Created: latency-svc-d5x8v May 9 21:37:44.265: INFO: Got endpoints: latency-svc-d5x8v [811.466719ms] May 9 21:37:44.349: INFO: Created: latency-svc-ppc6n May 9 21:37:44.363: INFO: Got endpoints: latency-svc-ppc6n [859.972289ms] May 9 21:37:44.403: INFO: Created: latency-svc-z5xlr May 9 21:37:44.465: INFO: Got endpoints: latency-svc-z5xlr [850.937638ms] May 9 21:37:44.477: INFO: Created: latency-svc-n82kc May 9 21:37:44.495: INFO: Got endpoints: latency-svc-n82kc [760.863657ms] May 9 21:37:44.519: INFO: Created: latency-svc-p65j9 May 9 21:37:44.540: INFO: Got endpoints: latency-svc-p65j9 [775.442022ms] May 9 21:37:44.540: INFO: Latencies: [166.607409ms 202.500236ms 412.408218ms 484.875949ms 520.639491ms 617.7933ms 641.068086ms 645.686152ms 665.954037ms 681.212191ms 683.376943ms 688.597666ms 695.259275ms 695.57085ms 701.245119ms 702.300805ms 706.347207ms 707.538593ms 708.149896ms 712.9096ms 713.610763ms 717.589001ms 720.707005ms 725.606983ms 729.165205ms 732.834328ms 736.043598ms 739.200635ms 742.728951ms 743.21814ms 744.353616ms 744.989873ms 748.531049ms 749.022011ms 752.026329ms 752.598912ms 754.535042ms 754.833397ms 755.710534ms 756.078147ms 757.815766ms 760.015845ms 760.301421ms 760.464243ms 760.863657ms 763.078643ms 765.244445ms 766.382298ms 771.973581ms 772.575001ms 774.110012ms 775.442022ms 779.384781ms 779.638462ms 780.674721ms 781.165262ms 782.233366ms 784.263691ms 785.954166ms 786.40809ms 787.937086ms 788.309862ms 790.484108ms 792.398736ms 794.448776ms 794.849293ms 795.052247ms 801.131707ms 802.787324ms 803.4646ms 803.776642ms 804.565481ms 806.259241ms 806.323507ms 806.995559ms 809.451518ms 811.466719ms 812.682493ms 813.820351ms 816.790694ms 820.806087ms 820.918459ms 822.795956ms 824.886171ms 826.627273ms 826.809723ms 827.9814ms 
832.351378ms 833.576285ms 834.552911ms 836.710693ms 837.675272ms 837.996395ms 838.116628ms 840.820614ms 840.980112ms 842.937305ms 843.860564ms 843.971587ms 844.621685ms 845.148906ms 845.361026ms 846.664629ms 849.244126ms 850.637532ms 850.867156ms 850.937638ms 851.538066ms 852.021163ms 856.231886ms 857.450793ms 858.602923ms 859.972289ms 862.705862ms 864.038635ms 864.081715ms 864.525294ms 865.610524ms 867.720551ms 868.539311ms 869.306943ms 870.080315ms 870.296101ms 871.078166ms 871.243827ms 873.940368ms 874.280041ms 875.366308ms 879.221967ms 884.487774ms 884.690887ms 885.054952ms 886.445431ms 886.556252ms 888.285197ms 888.478493ms 890.179056ms 892.026447ms 892.408933ms 895.320457ms 895.486215ms 896.04889ms 901.027544ms 906.029672ms 907.597582ms 913.270138ms 913.571115ms 914.11038ms 916.481871ms 916.88783ms 917.037082ms 926.021921ms 928.105009ms 928.314216ms 928.620223ms 934.265871ms 940.869304ms 941.123784ms 941.849555ms 943.482068ms 945.562435ms 947.403632ms 948.396764ms 953.284408ms 955.416287ms 956.659034ms 963.794573ms 966.479596ms 968.96851ms 969.553621ms 970.075524ms 971.403969ms 976.509513ms 979.657718ms 983.72091ms 987.84227ms 995.922788ms 999.488969ms 1.002240952s 1.003716596s 1.007381114s 1.01260357s 1.018010216s 1.019913363s 1.02965259s 1.030022343s 1.031209515s 1.032794488s 1.043766024s 1.044858751s 1.047013437s 1.048866417s 1.051114667s 1.054100447s 1.057200177s 1.057815725s 1.081890026s 1.09210655s 1.095313097s 1.10094266s] May 9 21:37:44.541: INFO: 50 %ile: 845.148906ms May 9 21:37:44.541: INFO: 90 %ile: 1.007381114s May 9 21:37:44.541: INFO: 99 %ile: 1.095313097s May 9 21:37:44.541: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:44.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7080" for this suite. 
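The summary above sorts the 200 per-service endpoint-propagation latencies and reports nearest-rank percentiles. A small sketch of that bookkeeping; the sample values are lifted from the log, and the framework's exact quantile rounding may differ.

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile returns a nearest-rank p-th percentile (0-100) of an
    // ascending-sorted slice of samples.
    func percentile(sorted []time.Duration, p int) time.Duration {
        idx := (len(sorted) * p) / 100
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }

    func main() {
        samples := []time.Duration{ // assumed subset of the 200 samples
            166 * time.Millisecond, 845 * time.Millisecond,
            1007 * time.Millisecond, 1095 * time.Millisecond,
        }
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
        for _, p := range []int{50, 90, 99} {
            fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
        }
    }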
• [SLOW TEST:14.983 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":83,"skipped":1267,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:44.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c7c32de0-aad9-497f-bded-740c4a928337 STEP: Creating a pod to test consume configMaps May 9 21:37:44.665: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620" in namespace "projected-6292" to be "success or failure" May 9 21:37:44.669: INFO: Pod "pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125837ms May 9 21:37:46.672: INFO: Pod "pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00716727s May 9 21:37:48.676: INFO: Pod "pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011284726s STEP: Saw pod success May 9 21:37:48.676: INFO: Pod "pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620" satisfied condition "success or failure" May 9 21:37:48.679: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620 container projected-configmap-volume-test: STEP: delete the pod May 9 21:37:48.711: INFO: Waiting for pod pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620 to disappear May 9 21:37:48.740: INFO: Pod pod-projected-configmaps-f54b6e5c-9857-402a-81fb-66ef921be620 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:48.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6292" for this suite. 
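The pod in this test consumes a ConfigMap through a projected volume, remapping a key to a new path and reading it as a non-root user. A sketch of that pod spec; the key, paths, uid, and image are assumptions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        spec := corev1.PodSpec{
            // Run the whole pod as a non-root uid (assumed value).
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume-map",
                                },
                                // The "mapping": key data-1 appears in the
                                // volume under a different relative path.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "projected-configmap-volume-test",
                Image: "busybox", // assumed image
                Args:  []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                    ReadOnly:  true,
                }},
            }},
        }
        fmt.Println(spec.Volumes[0].Name)
    }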
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:48.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 21:37:48.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-747' May 9 21:37:48.900: INFO: stderr: "" May 9 21:37:48.900: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 9 21:37:48.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-747' May 9 21:37:59.523: INFO: stderr: "" May 9 21:37:59.523: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:37:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-747" for this suite. 
• [SLOW TEST:10.811 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":85,"skipped":1317,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:37:59.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6158.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6158.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6158.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6158.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6158.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6158.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 21:38:08.446: INFO: DNS probes using dns-6158/dns-test-1adc49c4-7f79-4cb8-a007-e87678e4f83d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:38:08.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6158" for this suite. 
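The getent probes above only assert that the expected names appear in the pod's /etc/hosts file, which the kubelet manages for each pod. A rough in-pod stand-in in Go; the entry name is taken from the probe commands.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // The kubelet writes the pod's own hostname (and hostname.subdomain
        // entries for headless-service pods) into /etc/hosts.
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if strings.Contains(string(data), "dns-querier-1") {
            fmt.Println("OK")
        } else {
            fmt.Println("missing expected /etc/hosts entry")
        }
    }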
• [SLOW TEST:9.641 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":86,"skipped":1322,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:38:09.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:38:09.655: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 9 21:38:14.705: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 9 21:38:14.705: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 9 21:38:16.729: INFO: Creating deployment "test-rollover-deployment" May 9 21:38:16.775: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 9 21:38:18.819: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 9 21:38:18.834: INFO: Ensure that both replica sets have 1 created replica May 9 21:38:18.857: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 9 21:38:18.863: INFO: Updating deployment test-rollover-deployment May 9 21:38:18.863: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 9 21:38:20.874: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 9 21:38:21.017: INFO: Make sure deployment "test-rollover-deployment" is complete May 9 21:38:21.033: INFO: all replica sets need to contain the pod-template-hash label May 9 21:38:21.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:38:23.043: 
INFO: all replica sets need to contain the pod-template-hash label May 9 21:38:23.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657102, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:38:25.041: INFO: all replica sets need to contain the pod-template-hash label May 9 21:38:25.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657102, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:38:27.041: INFO: all replica sets need to contain the pod-template-hash label May 9 21:38:27.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657102, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:38:29.042: INFO: all replica sets need to contain the pod-template-hash label May 9 21:38:29.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657102, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:38:31.042: INFO: all replica sets need to contain the pod-template-hash label May 9 21:38:31.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657102, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657096, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:38:33.040: INFO: May 9 21:38:33.041: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 9 21:38:33.047: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1328 /apis/apps/v1/namespaces/deployment-1328/deployments/test-rollover-deployment fdd2b768-a39d-4fae-ad47-57cd94e43436 14803483 2 2020-05-09 21:38:16 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f97c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-09 21:38:16 +0000 UTC,LastTransitionTime:2020-05-09 21:38:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-09 21:38:32 +0000 UTC,LastTransitionTime:2020-05-09 21:38:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 9 21:38:33.051: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-1328 /apis/apps/v1/namespaces/deployment-1328/replicasets/test-rollover-deployment-574d6dfbff 99e63780-8387-42c8-ac07-fba569b4dfe2 14803472 2 2020-05-09 21:38:18 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fdd2b768-a39d-4fae-ad47-57cd94e43436 0xc00259c647 0xc00259c648}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00259c6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 9 21:38:33.051: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 9 21:38:33.051: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1328 /apis/apps/v1/namespaces/deployment-1328/replicasets/test-rollover-controller 045dbed1-c6e9-4f2d-9d44-2746d8ec2be9 14803481 2 2020-05-09 21:38:09 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fdd2b768-a39d-4fae-ad47-57cd94e43436 0xc00259c567 0xc00259c568}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00259c5d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:38:33.051: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1328 /apis/apps/v1/namespaces/deployment-1328/replicasets/test-rollover-deployment-f6c94f66c 2ea05c13-264f-4fc5-884d-3bc9a62c214d 14803421 2 2020-05-09 21:38:16 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fdd2b768-a39d-4fae-ad47-57cd94e43436 0xc00259c760 0xc00259c761}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00259c7e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:38:33.054: INFO: Pod "test-rollover-deployment-574d6dfbff-czq2n" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-czq2n test-rollover-deployment-574d6dfbff- deployment-1328 /api/v1/namespaces/deployment-1328/pods/test-rollover-deployment-574d6dfbff-czq2n 9fc8eda4-2ca4-4362-a8a4-8f9c7c049d46 14803440 0 2020-05-09 21:38:18 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 99e63780-8387-42c8-ac07-fba569b4dfe2 0xc00262aa57 0xc00262aa58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k628r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k628r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k628r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:38:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:38:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:38:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:38:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.190,StartTime:2020-05-09 21:38:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 21:38:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://cf9592776f02da25520607c3370d7a8366c291c8ce7d98a0b9018941f07f4cc0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:38:33.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1328" for this suite. • [SLOW TEST:23.860 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":87,"skipped":1331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:38:33.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:38:33.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-865" for this suite. 
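Returning to the rollover run whose status dumps dominate the log above: the spec amounts to a Deployment that replaces the pods of a pre-existing bare ReplicaSet without ever dropping below the desired count. A minimal sketch of an equivalent manifest, reconstructed from the logged Spec (the name, labels, image, strategy, and minReadySeconds all appear in the dump above; everything else is an assumption):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # matches MinReadySeconds:10 in the dump
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never go below the desired replica count
      maxSurge: 1              # allow at most one extra pod during the rollover
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF

With maxUnavailable=0 and maxSurge=1 the controller keeps the old pod serving until the new ReplicaSet's pod has been Ready for minReadySeconds, which is why the status dumps above report UpdatedReplicas:1 alongside UnavailableReplicas:1 for several polls before the rollover completes.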
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1374,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:38:33.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-c7m8 STEP: Creating a pod to test atomic-volume-subpath May 9 21:38:33.469: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-c7m8" in namespace "subpath-2825" to be "success or failure" May 9 21:38:33.474: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407272ms May 9 21:38:35.631: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161884035s May 9 21:38:37.636: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 4.166672327s May 9 21:38:39.643: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 6.173825302s May 9 21:38:41.648: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 8.178320018s May 9 21:38:43.652: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 10.182111706s May 9 21:38:45.656: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 12.186186964s May 9 21:38:47.662: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 14.192100371s May 9 21:38:49.665: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 16.195516997s May 9 21:38:51.670: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 18.200030904s May 9 21:38:53.674: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 20.204038474s May 9 21:38:55.678: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 22.208441538s May 9 21:38:57.683: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Running", Reason="", readiness=true. Elapsed: 24.213058005s May 9 21:38:59.687: INFO: Pod "pod-subpath-test-secret-c7m8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.217294064s STEP: Saw pod success May 9 21:38:59.687: INFO: Pod "pod-subpath-test-secret-c7m8" satisfied condition "success or failure" May 9 21:38:59.689: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-c7m8 container test-container-subpath-secret-c7m8: STEP: delete the pod May 9 21:38:59.716: INFO: Waiting for pod pod-subpath-test-secret-c7m8 to disappear May 9 21:38:59.747: INFO: Pod pod-subpath-test-secret-c7m8 no longer exists STEP: Deleting pod pod-subpath-test-secret-c7m8 May 9 21:38:59.747: INFO: Deleting pod "pod-subpath-test-secret-c7m8" in namespace "subpath-2825" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:38:59.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2825" for this suite. • [SLOW TEST:26.443 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":89,"skipped":1382,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:38:59.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:39:04.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6598" for this suite. 
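The adoption check above can be reproduced by hand: create a bare pod carrying a label, then create a ReplicationController whose selector matches that label; the controller manager adopts the orphan by stamping an ownerReference on it instead of starting a second pod. A minimal sketch, assuming the 'name: pod-adoption' label from the STEP lines and an nginx image (the image choice is an assumption):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption         # the label the RC selector will match
spec:
  containers:
  - name: pod-adoption
    image: nginx               # assumed; any long-running container works
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption         # matches the pre-existing pod
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx
EOF

# the orphan now carries an ownerReference pointing at the RC
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'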
• [SLOW TEST:5.157 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":90,"skipped":1392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:39:04.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1789 STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 21:39:04.962: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 21:39:31.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.41:8080/dial?request=hostname&protocol=http&host=10.244.1.40&port=8080&tries=1'] Namespace:pod-network-test-1789 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:39:31.119: INFO: >>> kubeConfig: /root/.kube/config I0509 21:39:31.159720 7 log.go:172] (0xc002a071e0) (0xc00107ea00) Create stream I0509 21:39:31.159757 7 log.go:172] (0xc002a071e0) (0xc00107ea00) Stream added, broadcasting: 1 I0509 21:39:31.161812 7 log.go:172] (0xc002a071e0) Reply frame received for 1 I0509 21:39:31.161854 7 log.go:172] (0xc002a071e0) (0xc00107ebe0) Create stream I0509 21:39:31.161862 7 log.go:172] (0xc002a071e0) (0xc00107ebe0) Stream added, broadcasting: 3 I0509 21:39:31.163101 7 log.go:172] (0xc002a071e0) Reply frame received for 3 I0509 21:39:31.163150 7 log.go:172] (0xc002a071e0) (0xc000ce32c0) Create stream I0509 21:39:31.163168 7 log.go:172] (0xc002a071e0) (0xc000ce32c0) Stream added, broadcasting: 5 I0509 21:39:31.164147 7 log.go:172] (0xc002a071e0) Reply frame received for 5 I0509 21:39:31.268085 7 log.go:172] (0xc002a071e0) Data frame received for 3 I0509 21:39:31.268124 7 log.go:172] (0xc00107ebe0) (3) Data frame handling I0509 21:39:31.268146 7 log.go:172] (0xc00107ebe0) (3) Data frame sent I0509 21:39:31.268789 7 log.go:172] (0xc002a071e0) Data frame received for 3 I0509 21:39:31.268820 7 log.go:172] (0xc002a071e0) Data frame received for 5 I0509 21:39:31.268841 7 log.go:172] (0xc000ce32c0) (5) Data frame handling I0509 21:39:31.268859 7 log.go:172] (0xc00107ebe0) (3) Data frame handling I0509 21:39:31.270408 7 log.go:172] (0xc002a071e0) Data frame received for 1 I0509 21:39:31.270433 7 log.go:172] (0xc00107ea00) (1) Data frame handling I0509 21:39:31.270457 7 log.go:172] (0xc00107ea00) (1) Data frame sent I0509 
21:39:31.270477 7 log.go:172] (0xc002a071e0) (0xc00107ea00) Stream removed, broadcasting: 1 I0509 21:39:31.270506 7 log.go:172] (0xc002a071e0) Go away received I0509 21:39:31.270590 7 log.go:172] (0xc002a071e0) (0xc00107ea00) Stream removed, broadcasting: 1 I0509 21:39:31.270610 7 log.go:172] (0xc002a071e0) (0xc00107ebe0) Stream removed, broadcasting: 3 I0509 21:39:31.270621 7 log.go:172] (0xc002a071e0) (0xc000ce32c0) Stream removed, broadcasting: 5 May 9 21:39:31.270: INFO: Waiting for responses: map[] May 9 21:39:31.274: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.41:8080/dial?request=hostname&protocol=http&host=10.244.2.193&port=8080&tries=1'] Namespace:pod-network-test-1789 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 21:39:31.274: INFO: >>> kubeConfig: /root/.kube/config I0509 21:39:31.303166 7 log.go:172] (0xc00265ea50) (0xc000ce3860) Create stream I0509 21:39:31.303196 7 log.go:172] (0xc00265ea50) (0xc000ce3860) Stream added, broadcasting: 1 I0509 21:39:31.305619 7 log.go:172] (0xc00265ea50) Reply frame received for 1 I0509 21:39:31.305651 7 log.go:172] (0xc00265ea50) (0xc0009f2000) Create stream I0509 21:39:31.305666 7 log.go:172] (0xc00265ea50) (0xc0009f2000) Stream added, broadcasting: 3 I0509 21:39:31.306541 7 log.go:172] (0xc00265ea50) Reply frame received for 3 I0509 21:39:31.306581 7 log.go:172] (0xc00265ea50) (0xc000be8e60) Create stream I0509 21:39:31.306591 7 log.go:172] (0xc00265ea50) (0xc000be8e60) Stream added, broadcasting: 5 I0509 21:39:31.307307 7 log.go:172] (0xc00265ea50) Reply frame received for 5 I0509 21:39:31.388742 7 log.go:172] (0xc00265ea50) Data frame received for 3 I0509 21:39:31.388763 7 log.go:172] (0xc0009f2000) (3) Data frame handling I0509 21:39:31.388775 7 log.go:172] (0xc0009f2000) (3) Data frame sent I0509 21:39:31.389717 7 log.go:172] (0xc00265ea50) Data frame received for 5 I0509 21:39:31.389753 7 log.go:172] (0xc000be8e60) (5) Data frame handling I0509 21:39:31.389775 7 log.go:172] (0xc00265ea50) Data frame received for 3 I0509 21:39:31.389790 7 log.go:172] (0xc0009f2000) (3) Data frame handling I0509 21:39:31.390981 7 log.go:172] (0xc00265ea50) Data frame received for 1 I0509 21:39:31.391008 7 log.go:172] (0xc000ce3860) (1) Data frame handling I0509 21:39:31.391031 7 log.go:172] (0xc000ce3860) (1) Data frame sent I0509 21:39:31.391067 7 log.go:172] (0xc00265ea50) (0xc000ce3860) Stream removed, broadcasting: 1 I0509 21:39:31.391114 7 log.go:172] (0xc00265ea50) Go away received I0509 21:39:31.391176 7 log.go:172] (0xc00265ea50) (0xc000ce3860) Stream removed, broadcasting: 1 I0509 21:39:31.391195 7 log.go:172] (0xc00265ea50) (0xc0009f2000) Stream removed, broadcasting: 3 I0509 21:39:31.391204 7 log.go:172] (0xc00265ea50) (0xc000be8e60) Stream removed, broadcasting: 5 May 9 21:39:31.391: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:39:31.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1789" for this suite. 
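The two ExecWithOptions calls above are the whole connectivity check: each test pod runs the agnhost image, which answers /hostname on port 8080, and a helper pod exposes a /dial endpoint that relays the probe so the request actually crosses the pod network. The command shape below is lifted directly from the log; only the addresses are placeholders for the probing pod (10.244.1.41 above) and the target pod:

# run inside host-test-container-pod; asks 10.244.1.41 to dial the target once
curl -g -q -s 'http://10.244.1.41:8080/dial?request=hostname&protocol=http&host=10.244.2.193&port=8080&tries=1'

A successful probe returns a small JSON body listing the hostnames that answered, and the "Waiting for responses: map[]" lines above indicate that no expected responder is still outstanding.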
• [SLOW TEST:26.487 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1429,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:39:31.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes May 9 21:39:31.547: INFO: Pod name pod-release: Found 0 pods out of 1 May 9 21:39:36.560: INFO: Pod name pod-release: Found 1 pod out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:39:36.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1903" for this suite. • [SLOW TEST:5.310 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":92,"skipped":1441,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:39:36.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:39:49.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-90" for this suite. • [SLOW TEST:13.273 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":93,"skipped":1454,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:39:49.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:39:50.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1114" for this suite. 
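The discovery walk in the STEP lines above can be repeated with kubectl's raw API access, since each discovery document is plain JSON served by the apiserver. A minimal sketch (jq is assumed to be installed, purely for readability):

# group list: apiextensions.k8s.io must appear among the groups
kubectl get --raw /apis | jq -r '.groups[].name' | grep apiextensions

# group document: v1 must be listed under .versions
kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'

# version document: must expose the customresourcedefinitions resource
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq -r '.resources[].name'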
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":94,"skipped":1463,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:39:50.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 9 21:39:50.276: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7695" to be "success or failure" May 9 21:39:50.298: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.002205ms May 9 21:39:52.303: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026566742s May 9 21:39:54.306: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03014571s May 9 21:39:56.312: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035255913s STEP: Saw pod success May 9 21:39:56.312: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 9 21:39:56.314: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 9 21:39:56.364: INFO: Waiting for pod pod-host-path-test to disappear May 9 21:39:56.425: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:39:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7695" for this suite. 
• [SLOW TEST:6.322 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:39:56.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:40:56.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7610" for this suite. • [SLOW TEST:60.076 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1506,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:40:56.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 9 21:40:56.558: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 21:40:56.595: INFO: Waiting for terminating namespaces to be deleted... 
May 9 21:40:56.597: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 9 21:40:56.612: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:40:56.612: INFO: Container kindnet-cni ready: true, restart count 0 May 9 21:40:56.612: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:40:56.612: INFO: Container kube-proxy ready: true, restart count 0 May 9 21:40:56.612: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 9 21:40:56.617: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 9 21:40:56.617: INFO: Container kube-hunter ready: false, restart count 0 May 9 21:40:56.617: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:40:56.617: INFO: Container kindnet-cni ready: true, restart count 0 May 9 21:40:56.617: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 9 21:40:56.617: INFO: Container kube-bench ready: false, restart count 0 May 9 21:40:56.617: INFO: test-webserver-8035c1c3-e697-4aab-82b3-bd6b4412b1f8 from container-probe-7610 started at 2020-05-09 21:39:56 +0000 UTC (1 container status recorded) May 9 21:40:56.617: INFO: Container test-webserver ready: false, restart count 0 May 9 21:40:56.617: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:40:56.617: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160d7a10ec589bad], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:40:57.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3280" for this suite. 
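The FailedScheduling event above is easy to reproduce: give a pod a nodeSelector that no node's labels satisfy and read its events back. A minimal sketch (the selector key and value are deliberately nonsense, and the pause image is an assumption; any image works because the pod never starts):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod         # name borrowed from the event above
spec:
  nodeSelector:
    label: nonexistent         # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# expect: Warning  FailedScheduling  0/3 nodes are available: 3 node(s) didn't match node selector.
kubectl describe pod restricted-pod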
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":97,"skipped":1509,"failed":0} SSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:40:57.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 9 21:41:04.299: INFO: Successfully updated pod "adopt-release-nxzgc" STEP: Checking that the Job readopts the Pod May 9 21:41:04.299: INFO: Waiting up to 15m0s for pod "adopt-release-nxzgc" in namespace "job-857" to be "adopted" May 9 21:41:04.353: INFO: Pod "adopt-release-nxzgc": Phase="Running", Reason="", readiness=true. Elapsed: 54.297814ms May 9 21:41:06.420: INFO: Pod "adopt-release-nxzgc": Phase="Running", Reason="", readiness=true. Elapsed: 2.121130412s May 9 21:41:06.420: INFO: Pod "adopt-release-nxzgc" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 9 21:41:06.930: INFO: Successfully updated pod "adopt-release-nxzgc" STEP: Checking that the Job releases the Pod May 9 21:41:06.930: INFO: Waiting up to 15m0s for pod "adopt-release-nxzgc" in namespace "job-857" to be "released" May 9 21:41:06.934: INFO: Pod "adopt-release-nxzgc": Phase="Running", Reason="", readiness=true. Elapsed: 4.379916ms May 9 21:41:08.938: INFO: Pod "adopt-release-nxzgc": Phase="Running", Reason="", readiness=true. Elapsed: 2.008142833s May 9 21:41:08.938: INFO: Pod "adopt-release-nxzgc" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:41:08.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-857" for this suite. 
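Both halves of the Job test above come down to the controller reconciling ownerReferences against its label selector. A rough sketch of the two manual steps, assuming $POD names one of the Job's pods and that the Job uses the default generated selector on the controller-uid label (both assumptions):

# orphan the pod: its labels still match, so the job controller re-adopts it
kubectl patch pod "$POD" --type=json \
  -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'

# release the pod: remove the selector label and the controller drops ownership
kubectl label pod "$POD" controller-uid-

# an adopted pod shows the Job's name here; a released pod shows nothing
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences[0].name}'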
• [SLOW TEST:11.296 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":98,"skipped":1514,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:41:08.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 9 21:41:09.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 9 21:41:09.242: INFO: stderr: "" May 9 21:41:09.242: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:41:09.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4065" for this suite. 
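The \x1b[0;32m and \x1b[0;33m sequences in the cluster-info stdout above are ANSI color codes (green for the service name, yellow for its URL), which kubectl emits even when the output is captured. Stripped of the escapes, the captured stdout reads:

Kubernetes master is running at https://172.30.12.66:32770
KubeDNS is running at https://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.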
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":99,"skipped":1522,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:41:09.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-92a77454-9479-4e65-b9bc-6f2144974f53 STEP: Creating a pod to test consume configMaps May 9 21:41:09.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276" in namespace "configmap-7496" to be "success or failure" May 9 21:41:09.447: INFO: Pod "pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276": Phase="Pending", Reason="", readiness=false. Elapsed: 18.992717ms May 9 21:41:11.459: INFO: Pod "pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031080258s May 9 21:41:13.464: INFO: Pod "pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035453883s STEP: Saw pod success May 9 21:41:13.464: INFO: Pod "pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276" satisfied condition "success or failure" May 9 21:41:13.466: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276 container configmap-volume-test: STEP: delete the pod May 9 21:41:13.504: INFO: Waiting for pod pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276 to disappear May 9 21:41:13.522: INFO: Pod pod-configmaps-4db2e36f-5c60-419c-9528-41ca68b41276 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:41:13.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7496" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:41:13.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-t989 STEP: Creating a pod to test atomic-volume-subpath May 9 21:41:13.621: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t989" in namespace "subpath-7141" to be "success or failure" May 9 21:41:13.623: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348663ms May 9 21:41:15.627: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006519961s May 9 21:41:17.632: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 4.01107435s May 9 21:41:19.636: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 6.015071326s May 9 21:41:21.639: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 8.018200508s May 9 21:41:23.643: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 10.022276242s May 9 21:41:25.648: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 12.026753433s May 9 21:41:27.652: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 14.030832755s May 9 21:41:29.656: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 16.035021705s May 9 21:41:31.660: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 18.038743699s May 9 21:41:33.664: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 20.042919583s May 9 21:41:35.668: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Running", Reason="", readiness=true. Elapsed: 22.046823069s May 9 21:41:37.672: INFO: Pod "pod-subpath-test-configmap-t989": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.050966821s STEP: Saw pod success May 9 21:41:37.672: INFO: Pod "pod-subpath-test-configmap-t989" satisfied condition "success or failure" May 9 21:41:37.675: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-t989 container test-container-subpath-configmap-t989: STEP: delete the pod May 9 21:41:37.727: INFO: Waiting for pod pod-subpath-test-configmap-t989 to disappear May 9 21:41:37.729: INFO: Pod pod-subpath-test-configmap-t989 no longer exists STEP: Deleting pod pod-subpath-test-configmap-t989 May 9 21:41:37.729: INFO: Deleting pod "pod-subpath-test-configmap-t989" in namespace "subpath-7141" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:41:37.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7141" for this suite. • [SLOW TEST:24.205 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":101,"skipped":1556,"failed":0} [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:41:37.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 21:41:37.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3029' May 9 21:41:37.890: INFO: stderr: "" May 9 21:41:37.890: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 9 21:41:42.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3029 -o json' May 9 21:41:43.050: INFO: stderr: "" May 9 21:41:43.050: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-09T21:41:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n 
\"namespace\": \"kubectl-3029\",\n \"resourceVersion\": \"14804484\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3029/pods/e2e-test-httpd-pod\",\n \"uid\": \"6c67659b-b4ce-446e-9a36-9c9988b13011\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-67bns\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-67bns\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-67bns\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T21:41:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T21:41:40Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T21:41:40Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T21:41:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://395b25eaa8ea75d5325c105fbbbb03e736871e4ab70e646d32a5e98e35007f79\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-09T21:41:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.200\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.200\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-09T21:41:37Z\"\n }\n}\n" STEP: replace the image in the pod May 9 21:41:43.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3029' May 9 21:41:43.398: INFO: stderr: "" May 9 21:41:43.398: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 9 21:41:43.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3029' May 9 21:41:47.812: INFO: stderr: "" May 9 21:41:47.812: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:41:47.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3029" for this suite. • [SLOW TEST:10.090 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":102,"skipped":1556,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:41:47.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 9 21:41:47.913: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 21:41:47.926: INFO: Waiting for terminating namespaces to be deleted... 
May 9 21:41:47.928: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 9 21:41:47.932: INFO: adopt-release-nxzgc from job-857 started at 2020-05-09 21:40:57 +0000 UTC (1 container status recorded) May 9 21:41:47.932: INFO: Container c ready: false, restart count 0 May 9 21:41:47.932: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:41:47.932: INFO: Container kindnet-cni ready: true, restart count 0 May 9 21:41:47.932: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:41:47.932: INFO: Container kube-proxy ready: true, restart count 0 May 9 21:41:47.932: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 9 21:41:47.937: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:41:47.937: INFO: Container kube-proxy ready: true, restart count 0 May 9 21:41:47.937: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 9 21:41:47.937: INFO: Container kube-hunter ready: false, restart count 0 May 9 21:41:47.937: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 21:41:47.937: INFO: Container kindnet-cni ready: true, restart count 0 May 9 21:41:47.937: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 9 21:41:47.937: INFO: Container kube-bench ready: false, restart count 0 May 9 21:41:47.937: INFO: adopt-release-srjbs from job-857 started at 2020-05-09 21:40:57 +0000 UTC (1 container status recorded) May 9 21:41:47.937: INFO: Container c ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 9 21:41:48.084: INFO: Pod adopt-release-nxzgc requesting resource cpu=0m on Node jerma-worker May 9 21:41:48.084: INFO: Pod adopt-release-srjbs requesting resource cpu=0m on Node jerma-worker2 May 9 21:41:48.084: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 9 21:41:48.084: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 9 21:41:48.084: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 9 21:41:48.084: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 9 21:41:48.084: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 May 9 21:41:48.091: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker STEP: Creating another pod that requires an unavailable amount of CPU.
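The filler pods pin each node near its CPU allocatable, so the next pod's request cannot fit anywhere and the scheduler must emit FailedScheduling, as the events below confirm. A minimal sketch of the same probe (the 600-CPU figure is an arbitrary over-ask, not the value the suite derives from node allocatable):

    # additional-pod.yaml -- a request no node can satisfy; the pod stays
    # Pending with a FailedScheduling event like the one recorded below.
    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "600"    # deliberately unsatisfiable

    kubectl apply -f additional-pod.yaml
    kubectl describe pod additional-pod | sed -n '/Events:/,$p'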
STEP: Considering event: Type = [Normal], Name = [filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c.160d7a1ce804867f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2727/filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c.160d7a1d33e4be06], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c.160d7a1d818f311d], Reason = [Created], Message = [Created container filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c] STEP: Considering event: Type = [Normal], Name = [filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c.160d7a1da1d84fe7], Reason = [Started], Message = [Started container filler-pod-2a0d2820-4e2a-427d-879d-cf7fb5149f3c] STEP: Considering event: Type = [Normal], Name = [filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d.160d7a1cea01682b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2727/filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d.160d7a1d82c930eb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d.160d7a1db95d831d], Reason = [Created], Message = [Created container filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d] STEP: Considering event: Type = [Normal], Name = [filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d.160d7a1dc86ea750], Reason = [Started], Message = [Started container filler-pod-53e01703-76d6-4cd9-9ea3-14c8d7570c1d] STEP: Considering event: Type = [Warning], Name = [additional-pod.160d7a1e50c3a435], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:41:55.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2727" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.474 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":103,"skipped":1578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:41:55.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:42:06.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6" for this suite. • [SLOW TEST:11.133 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":104,"skipped":1616,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:42:06.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 9 21:42:06.543: INFO: Waiting up to 5m0s for pod "downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d" in namespace "downward-api-8786" to be "success or failure" May 9 21:42:06.576: INFO: Pod "downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.320029ms May 9 21:42:08.580: INFO: Pod "downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036571173s May 9 21:42:10.585: INFO: Pod "downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041632406s STEP: Saw pod success May 9 21:42:10.585: INFO: Pod "downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d" satisfied condition "success or failure" May 9 21:42:10.588: INFO: Trying to get logs from node jerma-worker pod downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d container dapi-container: STEP: delete the pod May 9 21:42:10.650: INFO: Waiting for pod downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d to disappear May 9 21:42:10.664: INFO: Pod downward-api-2164fed6-b8b2-4c9a-8d75-a3a98db2649d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:42:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8786" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1623,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:42:10.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:42:10.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb" in namespace "projected-833" to be "success or failure" May 9 21:42:10.880: INFO: Pod "downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38496ms May 9 21:42:12.884: INFO: Pod "downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006398509s May 9 21:42:14.887: INFO: Pod "downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009707547s STEP: Saw pod success May 9 21:42:14.887: INFO: Pod "downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb" satisfied condition "success or failure" May 9 21:42:14.890: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb container client-container: STEP: delete the pod May 9 21:42:14.974: INFO: Waiting for pod downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb to disappear May 9 21:42:14.982: INFO: Pod downwardapi-volume-8fc813e6-b299-40b6-abc2-8cc7bc86eacb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:42:14.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-833" for this suite. 
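The DefaultMode assertion above mounts pod metadata through a projected downward API volume and checks the mode bits on the resulting file. A minimal hand-rolled equivalent (pod name, busybox image, and the stat-based check are assumptions in place of the suite's mounttest binary):

    # downwardapi-mode-demo.yaml (hypothetical)
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0644          # the mode the test asserts on the file
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name

    kubectl apply -f downwardapi-mode-demo.yaml
    kubectl logs downwardapi-mode-demo    # expect: 644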
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1623,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:42:14.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 9 21:42:15.117: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:42:22.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-586" for this suite. • [SLOW TEST:7.529 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":107,"skipped":1629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:42:22.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 9 21:42:22.633: INFO: Waiting up to 5m0s for pod "pod-8b323b20-f61a-4d7f-b4af-c066e960b172" in namespace "emptydir-4674" to be "success or failure" May 9 21:42:22.650: INFO: Pod "pod-8b323b20-f61a-4d7f-b4af-c066e960b172": Phase="Pending", Reason="", readiness=false. Elapsed: 16.517716ms May 9 21:42:24.702: INFO: Pod "pod-8b323b20-f61a-4d7f-b4af-c066e960b172": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068878591s May 9 21:42:26.706: INFO: Pod "pod-8b323b20-f61a-4d7f-b4af-c066e960b172": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072560891s STEP: Saw pod success May 9 21:42:26.706: INFO: Pod "pod-8b323b20-f61a-4d7f-b4af-c066e960b172" satisfied condition "success or failure" May 9 21:42:26.708: INFO: Trying to get logs from node jerma-worker pod pod-8b323b20-f61a-4d7f-b4af-c066e960b172 container test-container: STEP: delete the pod May 9 21:42:26.755: INFO: Waiting for pod pod-8b323b20-f61a-4d7f-b4af-c066e960b172 to disappear May 9 21:42:26.758: INFO: Pod pod-8b323b20-f61a-4d7f-b4af-c066e960b172 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:42:26.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4674" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1717,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:42:26.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:42:26.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 9 21:42:27.529: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-09T21:42:27Z generation:1 name:name1 resourceVersion:14804823 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9013eb33-147a-4720-ba52-8f7474ee473c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 9 21:42:37.534: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-09T21:42:37Z generation:1 name:name2 resourceVersion:14804880 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c2b2073e-d27e-42bb-a65a-5d5859be8e68] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 9 21:42:47.541: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-09T21:42:27Z generation:2 name:name1 resourceVersion:14804908 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9013eb33-147a-4720-ba52-8f7474ee473c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 9 21:42:57.547: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-09T21:42:37Z generation:2 name:name2 resourceVersion:14804938 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c2b2073e-d27e-42bb-a65a-5d5859be8e68] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first 
CR May 9 21:43:07.554: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-09T21:42:27Z generation:2 name:name1 resourceVersion:14804969 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9013eb33-147a-4720-ba52-8f7474ee473c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 9 21:43:17.561: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-09T21:42:37Z generation:2 name:name2 resourceVersion:14804999 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c2b2073e-d27e-42bb-a65a-5d5859be8e68] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:43:28.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-71" for this suite. • [SLOW TEST:61.310 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":109,"skipped":1718,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:43:28.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:43:28.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1" in namespace "downward-api-2099" to be "success or failure" May 9 21:43:28.196: INFO: Pod "downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.454007ms May 9 21:43:30.199: INFO: Pod "downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007089408s May 9 21:43:32.203: INFO: Pod "downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010285486s STEP: Saw pod success May 9 21:43:32.203: INFO: Pod "downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1" satisfied condition "success or failure" May 9 21:43:32.204: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1 container client-container: STEP: delete the pod May 9 21:43:32.230: INFO: Waiting for pod downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1 to disappear May 9 21:43:32.246: INFO: Pod downwardapi-volume-c721718d-193e-4709-9960-07643cbbc7f1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:43:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2099" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1749,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:43:32.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 9 21:43:32.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-13' May 9 21:43:32.766: INFO: stderr: "" May 9 21:43:32.766: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 21:43:32.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-13' May 9 21:43:32.868: INFO: stderr: "" May 9 21:43:32.868: INFO: stdout: "update-demo-nautilus-48kx7 update-demo-nautilus-gcqkp " May 9 21:43:32.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48kx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:43:32.955: INFO: stderr: "" May 9 21:43:32.955: INFO: stdout: "" May 9 21:43:32.955: INFO: update-demo-nautilus-48kx7 is created but not running May 9 21:43:37.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-13' May 9 21:43:38.058: INFO: stderr: "" May 9 21:43:38.058: INFO: stdout: "update-demo-nautilus-48kx7 update-demo-nautilus-gcqkp " May 9 21:43:38.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48kx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:43:38.154: INFO: stderr: "" May 9 21:43:38.154: INFO: stdout: "true" May 9 21:43:38.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48kx7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:43:38.254: INFO: stderr: "" May 9 21:43:38.254: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:43:38.254: INFO: validating pod update-demo-nautilus-48kx7 May 9 21:43:38.258: INFO: got data: { "image": "nautilus.jpg" } May 9 21:43:38.258: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 21:43:38.259: INFO: update-demo-nautilus-48kx7 is verified up and running May 9 21:43:38.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gcqkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:43:38.345: INFO: stderr: "" May 9 21:43:38.345: INFO: stdout: "true" May 9 21:43:38.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gcqkp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:43:38.435: INFO: stderr: "" May 9 21:43:38.435: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 21:43:38.435: INFO: validating pod update-demo-nautilus-gcqkp May 9 21:43:38.439: INFO: got data: { "image": "nautilus.jpg" } May 9 21:43:38.439: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 9 21:43:38.439: INFO: update-demo-nautilus-gcqkp is verified up and running STEP: rolling-update to new replication controller May 9 21:43:38.441: INFO: scanned /root for discovery docs: May 9 21:43:38.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-13' May 9 21:44:01.053: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 9 21:44:01.053: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 21:44:01.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-13' May 9 21:44:01.148: INFO: stderr: "" May 9 21:44:01.148: INFO: stdout: "update-demo-kitten-h8ml9 update-demo-kitten-r7czc " May 9 21:44:01.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h8ml9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:44:01.235: INFO: stderr: "" May 9 21:44:01.235: INFO: stdout: "true" May 9 21:44:01.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h8ml9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:44:01.329: INFO: stderr: "" May 9 21:44:01.329: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 9 21:44:01.329: INFO: validating pod update-demo-kitten-h8ml9 May 9 21:44:01.332: INFO: got data: { "image": "kitten.jpg" } May 9 21:44:01.332: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 9 21:44:01.332: INFO: update-demo-kitten-h8ml9 is verified up and running May 9 21:44:01.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r7czc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:44:01.430: INFO: stderr: "" May 9 21:44:01.430: INFO: stdout: "true" May 9 21:44:01.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r7czc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-13' May 9 21:44:01.522: INFO: stderr: "" May 9 21:44:01.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 9 21:44:01.522: INFO: validating pod update-demo-kitten-r7czc May 9 21:44:01.526: INFO: got data: { "image": "kitten.jpg" } May 9 21:44:01.526: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 9 21:44:01.526: INFO: update-demo-kitten-r7czc is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:01.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-13" for this suite. • [SLOW TEST:29.278 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":111,"skipped":1749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:01.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 9 21:44:01.650: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9988 /api/v1/namespaces/watch-9988/configmaps/e2e-watch-test-watch-closed 1573e978-04f7-4d8f-a801-67d3f8d3d169 14805254 0 2020-05-09 21:44:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 21:44:01.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9988 /api/v1/namespaces/watch-9988/configmaps/e2e-watch-test-watch-closed 1573e978-04f7-4d8f-a801-67d3f8d3d169 14805255 0 2020-05-09 21:44:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first 
watch closed May 9 21:44:01.688: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9988 /api/v1/namespaces/watch-9988/configmaps/e2e-watch-test-watch-closed 1573e978-04f7-4d8f-a801-67d3f8d3d169 14805256 0 2020-05-09 21:44:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 21:44:01.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9988 /api/v1/namespaces/watch-9988/configmaps/e2e-watch-test-watch-closed 1573e978-04f7-4d8f-a801-67d3f8d3d169 14805257 0 2020-05-09 21:44:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9988" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":112,"skipped":1790,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:01.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 9 21:44:02.331: INFO: created pod pod-service-account-defaultsa May 9 21:44:02.331: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 9 21:44:02.338: INFO: created pod pod-service-account-mountsa May 9 21:44:02.338: INFO: pod pod-service-account-mountsa service account token volume mount: true May 9 21:44:02.362: INFO: created pod pod-service-account-nomountsa May 9 21:44:02.362: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 9 21:44:02.405: INFO: created pod pod-service-account-defaultsa-mountspec May 9 21:44:02.405: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 9 21:44:02.440: INFO: created pod pod-service-account-mountsa-mountspec May 9 21:44:02.441: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 9 21:44:02.483: INFO: created pod pod-service-account-nomountsa-mountspec May 9 21:44:02.483: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 9 21:44:02.567: INFO: created pod pod-service-account-defaultsa-nomountspec May 9 21:44:02.567: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 9 21:44:02.578: INFO: created pod pod-service-account-mountsa-nomountspec May 9 21:44:02.578: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 9 21:44:02.599: 
INFO: created pod pod-service-account-nomountsa-nomountspec May 9 21:44:02.600: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:02.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6410" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":113,"skipped":1801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:02.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4408 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node on which to schedule the stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4408 STEP: Creating statefulset with conflicting port in namespace statefulset-4408 STEP: Waiting until pod test-pod starts running in namespace statefulset-4408 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4408 May 9 21:44:16.992: INFO: Observed stateful pod in namespace: statefulset-4408, name: ss-0, uid: 811b0237-6965-470a-aac2-29327bfdd8db, status phase: Failed. Waiting for statefulset controller to delete. May 9 21:44:17.001: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4408 STEP: Removing pod with conflicting port in namespace statefulset-4408 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4408 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 9 21:44:23.491: INFO: Deleting all statefulsets in ns statefulset-4408 May 9 21:44:23.494: INFO: Scaling statefulset ss to 0 May 9 21:44:33.517: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:44:33.520: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:33.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4408" for this suite.
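What the recreate test exercises: the StatefulSet controller observes ss-0 fail on the port conflict, deletes it, and brings it back once the conflicting pod is gone; the AfterEach then scales to zero before deleting so the pod terminates cleanly. The same teardown by hand (namespace and names from this run):

    # Tear down the way the AfterEach does: scale to 0 first, then delete.
    kubectl -n statefulset-4408 scale statefulset ss --replicas=0
    # wait for status.replicas to reach 0, as the log does above
    kubectl -n statefulset-4408 get statefulset ss -o jsonpath='{.status.replicas}'
    kubectl -n statefulset-4408 delete statefulset ss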
• [SLOW TEST:30.826 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":114,"skipped":1864,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:33.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-6621 STEP: creating replication controller nodeport-test in namespace services-6621 I0509 21:44:33.723976 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6621, replica count: 2 I0509 21:44:36.774418 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:44:39.774656 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 9 21:44:39.774: INFO: Creating new exec pod May 9 21:44:44.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6621 execpodhccsx -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 9 21:44:45.010: INFO: stderr: "I0509 21:44:44.930678 2488 log.go:172] (0xc0000f42c0) (0xc000781a40) Create stream\nI0509 21:44:44.930728 2488 log.go:172] (0xc0000f42c0) (0xc000781a40) Stream added, broadcasting: 1\nI0509 21:44:44.932110 2488 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0509 21:44:44.932139 2488 log.go:172] (0xc0000f42c0) (0xc000bc0000) Create stream\nI0509 21:44:44.932149 2488 log.go:172] (0xc0000f42c0) (0xc000bc0000) Stream added, broadcasting: 3\nI0509 21:44:44.932760 2488 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0509 21:44:44.932781 2488 log.go:172] (0xc0000f42c0) (0xc000bc00a0) Create stream\nI0509 21:44:44.932788 2488 log.go:172] (0xc0000f42c0) (0xc000bc00a0) Stream added, broadcasting: 5\nI0509 21:44:44.933810 2488 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0509 21:44:45.003179 2488 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0509 21:44:45.003242 2488 log.go:172] (0xc000bc0000) (3) Data frame handling\nI0509 21:44:45.003306 2488 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0509 
21:44:45.003339 2488 log.go:172] (0xc000bc00a0) (5) Data frame handling\nI0509 21:44:45.003354 2488 log.go:172] (0xc000bc00a0) (5) Data frame sent\nI0509 21:44:45.003365 2488 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0509 21:44:45.003374 2488 log.go:172] (0xc000bc00a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0509 21:44:45.005101 2488 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0509 21:44:45.005265 2488 log.go:172] (0xc000781a40) (1) Data frame handling\nI0509 21:44:45.005292 2488 log.go:172] (0xc000781a40) (1) Data frame sent\nI0509 21:44:45.005306 2488 log.go:172] (0xc0000f42c0) (0xc000781a40) Stream removed, broadcasting: 1\nI0509 21:44:45.005365 2488 log.go:172] (0xc0000f42c0) Go away received\nI0509 21:44:45.005526 2488 log.go:172] (0xc0000f42c0) (0xc000781a40) Stream removed, broadcasting: 1\nI0509 21:44:45.005538 2488 log.go:172] (0xc0000f42c0) (0xc000bc0000) Stream removed, broadcasting: 3\nI0509 21:44:45.005544 2488 log.go:172] (0xc0000f42c0) (0xc000bc00a0) Stream removed, broadcasting: 5\n" May 9 21:44:45.010: INFO: stdout: "" May 9 21:44:45.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6621 execpodhccsx -- /bin/sh -x -c nc -zv -t -w 2 10.110.224.162 80' May 9 21:44:45.233: INFO: stderr: "I0509 21:44:45.144800 2509 log.go:172] (0xc000a7c000) (0xc000b020a0) Create stream\nI0509 21:44:45.144861 2509 log.go:172] (0xc000a7c000) (0xc000b020a0) Stream added, broadcasting: 1\nI0509 21:44:45.163536 2509 log.go:172] (0xc000a7c000) Reply frame received for 1\nI0509 21:44:45.163576 2509 log.go:172] (0xc000a7c000) (0xc000685f40) Create stream\nI0509 21:44:45.163583 2509 log.go:172] (0xc000a7c000) (0xc000685f40) Stream added, broadcasting: 3\nI0509 21:44:45.165844 2509 log.go:172] (0xc000a7c000) Reply frame received for 3\nI0509 21:44:45.165869 2509 log.go:172] (0xc000a7c000) (0xc000b02140) Create stream\nI0509 21:44:45.165875 2509 log.go:172] (0xc000a7c000) (0xc000b02140) Stream added, broadcasting: 5\nI0509 21:44:45.166683 2509 log.go:172] (0xc000a7c000) Reply frame received for 5\nI0509 21:44:45.226953 2509 log.go:172] (0xc000a7c000) Data frame received for 3\nI0509 21:44:45.226997 2509 log.go:172] (0xc000685f40) (3) Data frame handling\nI0509 21:44:45.227028 2509 log.go:172] (0xc000a7c000) Data frame received for 5\nI0509 21:44:45.227038 2509 log.go:172] (0xc000b02140) (5) Data frame handling\nI0509 21:44:45.227049 2509 log.go:172] (0xc000b02140) (5) Data frame sent\nI0509 21:44:45.227060 2509 log.go:172] (0xc000a7c000) Data frame received for 5\nI0509 21:44:45.227068 2509 log.go:172] (0xc000b02140) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.224.162 80\nConnection to 10.110.224.162 80 port [tcp/http] succeeded!\nI0509 21:44:45.228501 2509 log.go:172] (0xc000a7c000) Data frame received for 1\nI0509 21:44:45.228527 2509 log.go:172] (0xc000b020a0) (1) Data frame handling\nI0509 21:44:45.228541 2509 log.go:172] (0xc000b020a0) (1) Data frame sent\nI0509 21:44:45.228556 2509 log.go:172] (0xc000a7c000) (0xc000b020a0) Stream removed, broadcasting: 1\nI0509 21:44:45.228614 2509 log.go:172] (0xc000a7c000) Go away received\nI0509 21:44:45.228854 2509 log.go:172] (0xc000a7c000) (0xc000b020a0) Stream removed, broadcasting: 1\nI0509 21:44:45.228868 2509 log.go:172] (0xc000a7c000) (0xc000685f40) Stream removed, broadcasting: 3\nI0509 21:44:45.228875 2509 log.go:172] (0xc000a7c000) (0xc000b02140) Stream removed, broadcasting: 5\n" May 9 
21:44:45.233: INFO: stdout: "" May 9 21:44:45.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6621 execpodhccsx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30209' May 9 21:44:45.446: INFO: stderr: "I0509 21:44:45.359140 2529 log.go:172] (0xc000105600) (0xc0009fa140) Create stream\nI0509 21:44:45.359193 2529 log.go:172] (0xc000105600) (0xc0009fa140) Stream added, broadcasting: 1\nI0509 21:44:45.361827 2529 log.go:172] (0xc000105600) Reply frame received for 1\nI0509 21:44:45.361892 2529 log.go:172] (0xc000105600) (0xc000667a40) Create stream\nI0509 21:44:45.361911 2529 log.go:172] (0xc000105600) (0xc000667a40) Stream added, broadcasting: 3\nI0509 21:44:45.363026 2529 log.go:172] (0xc000105600) Reply frame received for 3\nI0509 21:44:45.363061 2529 log.go:172] (0xc000105600) (0xc0003d9400) Create stream\nI0509 21:44:45.363078 2529 log.go:172] (0xc000105600) (0xc0003d9400) Stream added, broadcasting: 5\nI0509 21:44:45.364337 2529 log.go:172] (0xc000105600) Reply frame received for 5\nI0509 21:44:45.437434 2529 log.go:172] (0xc000105600) Data frame received for 5\nI0509 21:44:45.437500 2529 log.go:172] (0xc0003d9400) (5) Data frame handling\nI0509 21:44:45.437542 2529 log.go:172] (0xc0003d9400) (5) Data frame sent\nI0509 21:44:45.437571 2529 log.go:172] (0xc000105600) Data frame received for 5\nI0509 21:44:45.437585 2529 log.go:172] (0xc0003d9400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30209\nConnection to 172.17.0.10 30209 port [tcp/30209] succeeded!\nI0509 21:44:45.437612 2529 log.go:172] (0xc0003d9400) (5) Data frame sent\nI0509 21:44:45.438074 2529 log.go:172] (0xc000105600) Data frame received for 3\nI0509 21:44:45.438103 2529 log.go:172] (0xc000667a40) (3) Data frame handling\nI0509 21:44:45.438129 2529 log.go:172] (0xc000105600) Data frame received for 5\nI0509 21:44:45.438140 2529 log.go:172] (0xc0003d9400) (5) Data frame handling\nI0509 21:44:45.439852 2529 log.go:172] (0xc000105600) Data frame received for 1\nI0509 21:44:45.439896 2529 log.go:172] (0xc0009fa140) (1) Data frame handling\nI0509 21:44:45.439910 2529 log.go:172] (0xc0009fa140) (1) Data frame sent\nI0509 21:44:45.439922 2529 log.go:172] (0xc000105600) (0xc0009fa140) Stream removed, broadcasting: 1\nI0509 21:44:45.439941 2529 log.go:172] (0xc000105600) Go away received\nI0509 21:44:45.440448 2529 log.go:172] (0xc000105600) (0xc0009fa140) Stream removed, broadcasting: 1\nI0509 21:44:45.440476 2529 log.go:172] (0xc000105600) (0xc000667a40) Stream removed, broadcasting: 3\nI0509 21:44:45.440493 2529 log.go:172] (0xc000105600) (0xc0003d9400) Stream removed, broadcasting: 5\n" May 9 21:44:45.446: INFO: stdout: "" May 9 21:44:45.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6621 execpodhccsx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30209' May 9 21:44:45.646: INFO: stderr: "I0509 21:44:45.561745 2551 log.go:172] (0xc0001094a0) (0xc000a1c140) Create stream\nI0509 21:44:45.561799 2551 log.go:172] (0xc0001094a0) (0xc000a1c140) Stream added, broadcasting: 1\nI0509 21:44:45.564108 2551 log.go:172] (0xc0001094a0) Reply frame received for 1\nI0509 21:44:45.564169 2551 log.go:172] (0xc0001094a0) (0xc0007395e0) Create stream\nI0509 21:44:45.564193 2551 log.go:172] (0xc0001094a0) (0xc0007395e0) Stream added, broadcasting: 3\nI0509 21:44:45.565054 2551 log.go:172] (0xc0001094a0) Reply frame received for 3\nI0509 21:44:45.565077 2551 log.go:172] (0xc0001094a0) (0xc000701c20) Create stream\nI0509 21:44:45.565085 2551 
log.go:172] (0xc0001094a0) (0xc000701c20) Stream added, broadcasting: 5\nI0509 21:44:45.566056 2551 log.go:172] (0xc0001094a0) Reply frame received for 5\nI0509 21:44:45.639094 2551 log.go:172] (0xc0001094a0) Data frame received for 5\nI0509 21:44:45.639125 2551 log.go:172] (0xc000701c20) (5) Data frame handling\nI0509 21:44:45.639134 2551 log.go:172] (0xc000701c20) (5) Data frame sent\nI0509 21:44:45.639142 2551 log.go:172] (0xc0001094a0) Data frame received for 5\nI0509 21:44:45.639151 2551 log.go:172] (0xc000701c20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30209\nConnection to 172.17.0.8 30209 port [tcp/30209] succeeded!\nI0509 21:44:45.639270 2551 log.go:172] (0xc0001094a0) Data frame received for 3\nI0509 21:44:45.639345 2551 log.go:172] (0xc0007395e0) (3) Data frame handling\nI0509 21:44:45.640842 2551 log.go:172] (0xc0001094a0) Data frame received for 1\nI0509 21:44:45.640863 2551 log.go:172] (0xc000a1c140) (1) Data frame handling\nI0509 21:44:45.640882 2551 log.go:172] (0xc000a1c140) (1) Data frame sent\nI0509 21:44:45.640979 2551 log.go:172] (0xc0001094a0) (0xc000a1c140) Stream removed, broadcasting: 1\nI0509 21:44:45.641392 2551 log.go:172] (0xc0001094a0) Go away received\nI0509 21:44:45.641439 2551 log.go:172] (0xc0001094a0) (0xc000a1c140) Stream removed, broadcasting: 1\nI0509 21:44:45.641452 2551 log.go:172] (0xc0001094a0) (0xc0007395e0) Stream removed, broadcasting: 3\nI0509 21:44:45.641460 2551 log.go:172] (0xc0001094a0) (0xc000701c20) Stream removed, broadcasting: 5\n" May 9 21:44:45.646: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:45.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6621" for this suite. 
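Each nc probe in the transcript above checks one reachability path for the NodePort service: the service DNS name, the ClusterIP on port 80, and both node IPs on the allocated node port. Condensed into one loop (the IPs and node port 30209 are specific to this run and would differ on another cluster):

    # From the in-cluster exec pod, probe every path the test exercises.
    for target in 'nodeport-test 80' '10.110.224.162 80' \
                  '172.17.0.10 30209' '172.17.0.8 30209'; do
      kubectl --kubeconfig=/root/.kube/config exec -n services-6621 \
        execpodhccsx -- /bin/sh -x -c "nc -zv -t -w 2 $target"
    done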
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.109 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":115,"skipped":1881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:45.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 9 21:44:45.737: INFO: Waiting up to 5m0s for pod "pod-13472146-5fd2-4f68-8727-784f41b6ebdf" in namespace "emptydir-6354" to be "success or failure" May 9 21:44:45.751: INFO: Pod "pod-13472146-5fd2-4f68-8727-784f41b6ebdf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.959828ms May 9 21:44:47.772: INFO: Pod "pod-13472146-5fd2-4f68-8727-784f41b6ebdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035511992s May 9 21:44:49.777: INFO: Pod "pod-13472146-5fd2-4f68-8727-784f41b6ebdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040421568s STEP: Saw pod success May 9 21:44:49.777: INFO: Pod "pod-13472146-5fd2-4f68-8727-784f41b6ebdf" satisfied condition "success or failure" May 9 21:44:49.781: INFO: Trying to get logs from node jerma-worker pod pod-13472146-5fd2-4f68-8727-784f41b6ebdf container test-container: STEP: delete the pod May 9 21:44:49.806: INFO: Waiting for pod pod-13472146-5fd2-4f68-8727-784f41b6ebdf to disappear May 9 21:44:49.810: INFO: Pod pod-13472146-5fd2-4f68-8727-784f41b6ebdf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:49.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6354" for this suite. 
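The emptydir case above launches a one-shot pod that mounts a memory-backed (tmpfs) emptyDir, writes a file as a non-root user with mode 0644, and exits 0 so the pod reaches Phase="Succeeded" (the "success or failure" condition being waited on). A rough equivalent manifest, assuming a busybox image and illustrative names (the suite uses its own test image and mount path):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # the "non-root" part of the case
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
EOF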
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1910,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:49.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 9 21:44:49.900: INFO: Waiting up to 5m0s for pod "client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be" in namespace "containers-4955" to be "success or failure" May 9 21:44:49.918: INFO: Pod "client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be": Phase="Pending", Reason="", readiness=false. Elapsed: 18.010002ms May 9 21:44:51.981: INFO: Pod "client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080909864s May 9 21:44:53.986: INFO: Pod "client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be": Phase="Running", Reason="", readiness=true. Elapsed: 4.085248423s May 9 21:44:55.990: INFO: Pod "client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089551604s STEP: Saw pod success May 9 21:44:55.990: INFO: Pod "client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be" satisfied condition "success or failure" May 9 21:44:55.993: INFO: Trying to get logs from node jerma-worker pod client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be container test-container: STEP: delete the pod May 9 21:44:56.033: INFO: Waiting for pod client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be to disappear May 9 21:44:56.196: INFO: Pod client-containers-75df9f8d-e1c2-4c7c-956d-f2cae444b7be no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:44:56.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4955" for this suite. 
• [SLOW TEST:6.388 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1913,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:44:56.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0509 21:45:08.647515 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 21:45:08.647: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:45:08.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1978" for this suite. 
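In the garbage-collector case above, half the pods carry two ownerReferences (both rc1 and rc2); deleting simpletest-rc-to-be-deleted with foreground propagation must leave those pods alive, because simpletest-rc-to-stay is still a valid owner. A rough CLI re-creation of the final check, assuming a kubectl new enough (v1.20+) for --cascade to accept a propagation policy (this run's v1.17 client only had a boolean --cascade); the rc names mirror the log:

# Foreground-delete one owner; dependents with another live owner survive.
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground
# List each surviving pod with the owners it still references:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.ownerReferences[*].name}{"\n"}{end}'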
• [SLOW TEST:12.451 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":118,"skipped":1914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:45:08.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5150 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5150 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5150 May 9 21:45:08.770: INFO: Found 0 stateful pods, waiting for 1 May 9 21:45:18.775: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 9 21:45:18.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:45:19.030: INFO: stderr: "I0509 21:45:18.911902 2573 log.go:172] (0xc000796a50) (0xc000784000) Create stream\nI0509 21:45:18.911972 2573 log.go:172] (0xc000796a50) (0xc000784000) Stream added, broadcasting: 1\nI0509 21:45:18.915380 2573 log.go:172] (0xc000796a50) Reply frame received for 1\nI0509 21:45:18.915426 2573 log.go:172] (0xc000796a50) (0xc000998000) Create stream\nI0509 21:45:18.915439 2573 log.go:172] (0xc000796a50) (0xc000998000) Stream added, broadcasting: 3\nI0509 21:45:18.916507 2573 log.go:172] (0xc000796a50) Reply frame received for 3\nI0509 21:45:18.916559 2573 log.go:172] (0xc000796a50) (0xc000784140) Create stream\nI0509 21:45:18.916587 2573 log.go:172] (0xc000796a50) (0xc000784140) Stream added, broadcasting: 5\nI0509 21:45:18.918012 2573 log.go:172] (0xc000796a50) Reply frame received for 5\nI0509 21:45:18.986013 2573 log.go:172] (0xc000796a50) Data frame received for 5\nI0509 21:45:18.986044 2573 log.go:172] (0xc000784140) (5) Data frame 
handling\nI0509 21:45:18.986070 2573 log.go:172] (0xc000784140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:45:19.022653 2573 log.go:172] (0xc000796a50) Data frame received for 5\nI0509 21:45:19.022712 2573 log.go:172] (0xc000784140) (5) Data frame handling\nI0509 21:45:19.022750 2573 log.go:172] (0xc000796a50) Data frame received for 3\nI0509 21:45:19.022775 2573 log.go:172] (0xc000998000) (3) Data frame handling\nI0509 21:45:19.022795 2573 log.go:172] (0xc000998000) (3) Data frame sent\nI0509 21:45:19.022814 2573 log.go:172] (0xc000796a50) Data frame received for 3\nI0509 21:45:19.022839 2573 log.go:172] (0xc000998000) (3) Data frame handling\nI0509 21:45:19.024467 2573 log.go:172] (0xc000796a50) Data frame received for 1\nI0509 21:45:19.024491 2573 log.go:172] (0xc000784000) (1) Data frame handling\nI0509 21:45:19.024502 2573 log.go:172] (0xc000784000) (1) Data frame sent\nI0509 21:45:19.024515 2573 log.go:172] (0xc000796a50) (0xc000784000) Stream removed, broadcasting: 1\nI0509 21:45:19.024536 2573 log.go:172] (0xc000796a50) Go away received\nI0509 21:45:19.024832 2573 log.go:172] (0xc000796a50) (0xc000784000) Stream removed, broadcasting: 1\nI0509 21:45:19.024854 2573 log.go:172] (0xc000796a50) (0xc000998000) Stream removed, broadcasting: 3\nI0509 21:45:19.024865 2573 log.go:172] (0xc000796a50) (0xc000784140) Stream removed, broadcasting: 5\n" May 9 21:45:19.030: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:45:19.030: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:45:19.034: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 9 21:45:29.038: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 21:45:29.038: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:45:29.071: INFO: POD NODE PHASE GRACE CONDITIONS May 9 21:45:29.071: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC }] May 9 21:45:29.071: INFO: May 9 21:45:29.071: INFO: StatefulSet ss has not reached scale 3, at 1 May 9 21:45:30.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978589752s May 9 21:45:31.079: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974556004s May 9 21:45:32.090: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970777244s May 9 21:45:33.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959806375s May 9 21:45:34.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954218174s May 9 21:45:35.106: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.948322792s May 9 21:45:36.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.943173846s May 9 21:45:37.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93801522s May 9 21:45:38.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 932.552772ms STEP: Scaling up stateful set ss to 3 replicas and waiting 
until all of them will be running in namespace statefulset-5150 May 9 21:45:39.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:45:39.342: INFO: stderr: "I0509 21:45:39.245827 2594 log.go:172] (0xc0009ee0b0) (0xc00069dd60) Create stream\nI0509 21:45:39.245873 2594 log.go:172] (0xc0009ee0b0) (0xc00069dd60) Stream added, broadcasting: 1\nI0509 21:45:39.247869 2594 log.go:172] (0xc0009ee0b0) Reply frame received for 1\nI0509 21:45:39.247909 2594 log.go:172] (0xc0009ee0b0) (0xc00069de00) Create stream\nI0509 21:45:39.247918 2594 log.go:172] (0xc0009ee0b0) (0xc00069de00) Stream added, broadcasting: 3\nI0509 21:45:39.248592 2594 log.go:172] (0xc0009ee0b0) Reply frame received for 3\nI0509 21:45:39.248616 2594 log.go:172] (0xc0009ee0b0) (0xc00069dea0) Create stream\nI0509 21:45:39.248623 2594 log.go:172] (0xc0009ee0b0) (0xc00069dea0) Stream added, broadcasting: 5\nI0509 21:45:39.249489 2594 log.go:172] (0xc0009ee0b0) Reply frame received for 5\nI0509 21:45:39.336784 2594 log.go:172] (0xc0009ee0b0) Data frame received for 3\nI0509 21:45:39.336811 2594 log.go:172] (0xc00069de00) (3) Data frame handling\nI0509 21:45:39.336837 2594 log.go:172] (0xc0009ee0b0) Data frame received for 5\nI0509 21:45:39.336899 2594 log.go:172] (0xc00069dea0) (5) Data frame handling\nI0509 21:45:39.336925 2594 log.go:172] (0xc00069dea0) (5) Data frame sent\nI0509 21:45:39.336942 2594 log.go:172] (0xc0009ee0b0) Data frame received for 5\nI0509 21:45:39.336957 2594 log.go:172] (0xc00069dea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0509 21:45:39.336981 2594 log.go:172] (0xc00069de00) (3) Data frame sent\nI0509 21:45:39.336994 2594 log.go:172] (0xc0009ee0b0) Data frame received for 3\nI0509 21:45:39.336998 2594 log.go:172] (0xc00069de00) (3) Data frame handling\nI0509 21:45:39.338553 2594 log.go:172] (0xc0009ee0b0) Data frame received for 1\nI0509 21:45:39.338582 2594 log.go:172] (0xc00069dd60) (1) Data frame handling\nI0509 21:45:39.338610 2594 log.go:172] (0xc00069dd60) (1) Data frame sent\nI0509 21:45:39.338634 2594 log.go:172] (0xc0009ee0b0) (0xc00069dd60) Stream removed, broadcasting: 1\nI0509 21:45:39.338669 2594 log.go:172] (0xc0009ee0b0) Go away received\nI0509 21:45:39.338926 2594 log.go:172] (0xc0009ee0b0) (0xc00069dd60) Stream removed, broadcasting: 1\nI0509 21:45:39.338941 2594 log.go:172] (0xc0009ee0b0) (0xc00069de00) Stream removed, broadcasting: 3\nI0509 21:45:39.338948 2594 log.go:172] (0xc0009ee0b0) (0xc00069dea0) Stream removed, broadcasting: 5\n" May 9 21:45:39.342: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:45:39.342: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:45:39.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:45:39.552: INFO: stderr: "I0509 21:45:39.476705 2614 log.go:172] (0xc00011e2c0) (0xc000651a40) Create stream\nI0509 21:45:39.476763 2614 log.go:172] (0xc00011e2c0) (0xc000651a40) Stream added, broadcasting: 1\nI0509 21:45:39.479355 2614 log.go:172] (0xc00011e2c0) Reply frame received for 1\nI0509 21:45:39.479385 2614 log.go:172] (0xc00011e2c0) (0xc000651ae0) Create stream\nI0509 21:45:39.479395 2614 log.go:172] 
(0xc00011e2c0) (0xc000651ae0) Stream added, broadcasting: 3\nI0509 21:45:39.480338 2614 log.go:172] (0xc00011e2c0) Reply frame received for 3\nI0509 21:45:39.480373 2614 log.go:172] (0xc00011e2c0) (0xc000022000) Create stream\nI0509 21:45:39.480387 2614 log.go:172] (0xc00011e2c0) (0xc000022000) Stream added, broadcasting: 5\nI0509 21:45:39.481638 2614 log.go:172] (0xc00011e2c0) Reply frame received for 5\nI0509 21:45:39.544910 2614 log.go:172] (0xc00011e2c0) Data frame received for 5\nI0509 21:45:39.544960 2614 log.go:172] (0xc00011e2c0) Data frame received for 3\nI0509 21:45:39.544997 2614 log.go:172] (0xc000651ae0) (3) Data frame handling\nI0509 21:45:39.545025 2614 log.go:172] (0xc000651ae0) (3) Data frame sent\nI0509 21:45:39.545043 2614 log.go:172] (0xc00011e2c0) Data frame received for 3\nI0509 21:45:39.545058 2614 log.go:172] (0xc000651ae0) (3) Data frame handling\nI0509 21:45:39.545093 2614 log.go:172] (0xc000022000) (5) Data frame handling\nI0509 21:45:39.545356 2614 log.go:172] (0xc000022000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0509 21:45:39.545585 2614 log.go:172] (0xc00011e2c0) Data frame received for 5\nI0509 21:45:39.545608 2614 log.go:172] (0xc000022000) (5) Data frame handling\nI0509 21:45:39.547229 2614 log.go:172] (0xc00011e2c0) Data frame received for 1\nI0509 21:45:39.547270 2614 log.go:172] (0xc000651a40) (1) Data frame handling\nI0509 21:45:39.547290 2614 log.go:172] (0xc000651a40) (1) Data frame sent\nI0509 21:45:39.547308 2614 log.go:172] (0xc00011e2c0) (0xc000651a40) Stream removed, broadcasting: 1\nI0509 21:45:39.547324 2614 log.go:172] (0xc00011e2c0) Go away received\nI0509 21:45:39.547784 2614 log.go:172] (0xc00011e2c0) (0xc000651a40) Stream removed, broadcasting: 1\nI0509 21:45:39.547799 2614 log.go:172] (0xc00011e2c0) (0xc000651ae0) Stream removed, broadcasting: 3\nI0509 21:45:39.547806 2614 log.go:172] (0xc00011e2c0) (0xc000022000) Stream removed, broadcasting: 5\n" May 9 21:45:39.552: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:45:39.552: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:45:39.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 9 21:45:39.763: INFO: stderr: "I0509 21:45:39.687312 2634 log.go:172] (0xc0000f5550) (0xc000661ae0) Create stream\nI0509 21:45:39.687370 2634 log.go:172] (0xc0000f5550) (0xc000661ae0) Stream added, broadcasting: 1\nI0509 21:45:39.689679 2634 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0509 21:45:39.689718 2634 log.go:172] (0xc0000f5550) (0xc0009e2000) Create stream\nI0509 21:45:39.689731 2634 log.go:172] (0xc0000f5550) (0xc0009e2000) Stream added, broadcasting: 3\nI0509 21:45:39.690747 2634 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0509 21:45:39.690790 2634 log.go:172] (0xc0000f5550) (0xc000200000) Create stream\nI0509 21:45:39.690805 2634 log.go:172] (0xc0000f5550) (0xc000200000) Stream added, broadcasting: 5\nI0509 21:45:39.691830 2634 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0509 21:45:39.757527 2634 log.go:172] (0xc0000f5550) Data frame received for 5\nI0509 21:45:39.757566 2634 log.go:172] (0xc000200000) (5) Data frame handling\nI0509 21:45:39.757578 2634 log.go:172] (0xc000200000) (5) 
Data frame sent\nI0509 21:45:39.757587 2634 log.go:172] (0xc0000f5550) Data frame received for 5\nI0509 21:45:39.757595 2634 log.go:172] (0xc000200000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0509 21:45:39.757618 2634 log.go:172] (0xc0000f5550) Data frame received for 3\nI0509 21:45:39.757626 2634 log.go:172] (0xc0009e2000) (3) Data frame handling\nI0509 21:45:39.757636 2634 log.go:172] (0xc0009e2000) (3) Data frame sent\nI0509 21:45:39.757644 2634 log.go:172] (0xc0000f5550) Data frame received for 3\nI0509 21:45:39.757651 2634 log.go:172] (0xc0009e2000) (3) Data frame handling\nI0509 21:45:39.759160 2634 log.go:172] (0xc0000f5550) Data frame received for 1\nI0509 21:45:39.759212 2634 log.go:172] (0xc000661ae0) (1) Data frame handling\nI0509 21:45:39.759232 2634 log.go:172] (0xc000661ae0) (1) Data frame sent\nI0509 21:45:39.759249 2634 log.go:172] (0xc0000f5550) (0xc000661ae0) Stream removed, broadcasting: 1\nI0509 21:45:39.759271 2634 log.go:172] (0xc0000f5550) Go away received\nI0509 21:45:39.759788 2634 log.go:172] (0xc0000f5550) (0xc000661ae0) Stream removed, broadcasting: 1\nI0509 21:45:39.759808 2634 log.go:172] (0xc0000f5550) (0xc0009e2000) Stream removed, broadcasting: 3\nI0509 21:45:39.759818 2634 log.go:172] (0xc0000f5550) (0xc000200000) Stream removed, broadcasting: 5\n" May 9 21:45:39.763: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 9 21:45:39.764: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 9 21:45:39.768: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 9 21:45:49.796: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:45:49.796: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:45:49.796: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 9 21:45:49.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:45:50.085: INFO: stderr: "I0509 21:45:49.977739 2657 log.go:172] (0xc0006008f0) (0xc0003ca000) Create stream\nI0509 21:45:49.977800 2657 log.go:172] (0xc0006008f0) (0xc0003ca000) Stream added, broadcasting: 1\nI0509 21:45:49.980306 2657 log.go:172] (0xc0006008f0) Reply frame received for 1\nI0509 21:45:49.980347 2657 log.go:172] (0xc0006008f0) (0xc0006f7b80) Create stream\nI0509 21:45:49.980359 2657 log.go:172] (0xc0006008f0) (0xc0006f7b80) Stream added, broadcasting: 3\nI0509 21:45:49.981509 2657 log.go:172] (0xc0006008f0) Reply frame received for 3\nI0509 21:45:49.981542 2657 log.go:172] (0xc0006008f0) (0xc000200000) Create stream\nI0509 21:45:49.981552 2657 log.go:172] (0xc0006008f0) (0xc000200000) Stream added, broadcasting: 5\nI0509 21:45:49.982437 2657 log.go:172] (0xc0006008f0) Reply frame received for 5\nI0509 21:45:50.078575 2657 log.go:172] (0xc0006008f0) Data frame received for 3\nI0509 21:45:50.078620 2657 log.go:172] (0xc0006f7b80) (3) Data frame handling\nI0509 21:45:50.078641 2657 log.go:172] (0xc0006f7b80) (3) Data frame sent\nI0509 21:45:50.078655 2657 log.go:172] (0xc0006008f0) Data frame received for 3\nI0509 21:45:50.078667 2657 
log.go:172] (0xc0006f7b80) (3) Data frame handling\nI0509 21:45:50.078738 2657 log.go:172] (0xc0006008f0) Data frame received for 5\nI0509 21:45:50.078777 2657 log.go:172] (0xc000200000) (5) Data frame handling\nI0509 21:45:50.078802 2657 log.go:172] (0xc000200000) (5) Data frame sent\nI0509 21:45:50.078827 2657 log.go:172] (0xc0006008f0) Data frame received for 5\nI0509 21:45:50.078845 2657 log.go:172] (0xc000200000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:45:50.079959 2657 log.go:172] (0xc0006008f0) Data frame received for 1\nI0509 21:45:50.079980 2657 log.go:172] (0xc0003ca000) (1) Data frame handling\nI0509 21:45:50.080003 2657 log.go:172] (0xc0003ca000) (1) Data frame sent\nI0509 21:45:50.080123 2657 log.go:172] (0xc0006008f0) (0xc0003ca000) Stream removed, broadcasting: 1\nI0509 21:45:50.080152 2657 log.go:172] (0xc0006008f0) Go away received\nI0509 21:45:50.080633 2657 log.go:172] (0xc0006008f0) (0xc0003ca000) Stream removed, broadcasting: 1\nI0509 21:45:50.080669 2657 log.go:172] (0xc0006008f0) (0xc0006f7b80) Stream removed, broadcasting: 3\nI0509 21:45:50.080703 2657 log.go:172] (0xc0006008f0) (0xc000200000) Stream removed, broadcasting: 5\n" May 9 21:45:50.085: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:45:50.085: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:45:50.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:45:50.332: INFO: stderr: "I0509 21:45:50.211528 2680 log.go:172] (0xc000452630) (0xc00097e640) Create stream\nI0509 21:45:50.211587 2680 log.go:172] (0xc000452630) (0xc00097e640) Stream added, broadcasting: 1\nI0509 21:45:50.216218 2680 log.go:172] (0xc000452630) Reply frame received for 1\nI0509 21:45:50.216255 2680 log.go:172] (0xc000452630) (0xc0003db540) Create stream\nI0509 21:45:50.216263 2680 log.go:172] (0xc000452630) (0xc0003db540) Stream added, broadcasting: 3\nI0509 21:45:50.217245 2680 log.go:172] (0xc000452630) Reply frame received for 3\nI0509 21:45:50.217283 2680 log.go:172] (0xc000452630) (0xc00097e000) Create stream\nI0509 21:45:50.217295 2680 log.go:172] (0xc000452630) (0xc00097e000) Stream added, broadcasting: 5\nI0509 21:45:50.218244 2680 log.go:172] (0xc000452630) Reply frame received for 5\nI0509 21:45:50.296971 2680 log.go:172] (0xc000452630) Data frame received for 5\nI0509 21:45:50.297001 2680 log.go:172] (0xc00097e000) (5) Data frame handling\nI0509 21:45:50.297022 2680 log.go:172] (0xc00097e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:45:50.325567 2680 log.go:172] (0xc000452630) Data frame received for 3\nI0509 21:45:50.325586 2680 log.go:172] (0xc0003db540) (3) Data frame handling\nI0509 21:45:50.325594 2680 log.go:172] (0xc0003db540) (3) Data frame sent\nI0509 21:45:50.325613 2680 log.go:172] (0xc000452630) Data frame received for 5\nI0509 21:45:50.325623 2680 log.go:172] (0xc00097e000) (5) Data frame handling\nI0509 21:45:50.325653 2680 log.go:172] (0xc000452630) Data frame received for 3\nI0509 21:45:50.325672 2680 log.go:172] (0xc0003db540) (3) Data frame handling\nI0509 21:45:50.327195 2680 log.go:172] (0xc000452630) Data frame received for 1\nI0509 21:45:50.327219 2680 log.go:172] (0xc00097e640) (1) Data frame handling\nI0509 21:45:50.327245 2680 log.go:172] 
(0xc00097e640) (1) Data frame sent\nI0509 21:45:50.327264 2680 log.go:172] (0xc000452630) (0xc00097e640) Stream removed, broadcasting: 1\nI0509 21:45:50.327488 2680 log.go:172] (0xc000452630) Go away received\nI0509 21:45:50.327755 2680 log.go:172] (0xc000452630) (0xc00097e640) Stream removed, broadcasting: 1\nI0509 21:45:50.327780 2680 log.go:172] (0xc000452630) (0xc0003db540) Stream removed, broadcasting: 3\nI0509 21:45:50.327794 2680 log.go:172] (0xc000452630) (0xc00097e000) Stream removed, broadcasting: 5\n" May 9 21:45:50.332: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:45:50.332: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:45:50.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5150 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 9 21:45:50.572: INFO: stderr: "I0509 21:45:50.463814 2700 log.go:172] (0xc0000f5550) (0xc0007ea000) Create stream\nI0509 21:45:50.463881 2700 log.go:172] (0xc0000f5550) (0xc0007ea000) Stream added, broadcasting: 1\nI0509 21:45:50.466985 2700 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0509 21:45:50.467018 2700 log.go:172] (0xc0000f5550) (0xc0006459a0) Create stream\nI0509 21:45:50.467028 2700 log.go:172] (0xc0000f5550) (0xc0006459a0) Stream added, broadcasting: 3\nI0509 21:45:50.467884 2700 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0509 21:45:50.467912 2700 log.go:172] (0xc0000f5550) (0xc00079a000) Create stream\nI0509 21:45:50.467921 2700 log.go:172] (0xc0000f5550) (0xc00079a000) Stream added, broadcasting: 5\nI0509 21:45:50.468703 2700 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0509 21:45:50.527133 2700 log.go:172] (0xc0000f5550) Data frame received for 5\nI0509 21:45:50.527155 2700 log.go:172] (0xc00079a000) (5) Data frame handling\nI0509 21:45:50.527166 2700 log.go:172] (0xc00079a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0509 21:45:50.565320 2700 log.go:172] (0xc0000f5550) Data frame received for 3\nI0509 21:45:50.565363 2700 log.go:172] (0xc0006459a0) (3) Data frame handling\nI0509 21:45:50.565379 2700 log.go:172] (0xc0006459a0) (3) Data frame sent\nI0509 21:45:50.565492 2700 log.go:172] (0xc0000f5550) Data frame received for 5\nI0509 21:45:50.565510 2700 log.go:172] (0xc00079a000) (5) Data frame handling\nI0509 21:45:50.565667 2700 log.go:172] (0xc0000f5550) Data frame received for 3\nI0509 21:45:50.565687 2700 log.go:172] (0xc0006459a0) (3) Data frame handling\nI0509 21:45:50.567260 2700 log.go:172] (0xc0000f5550) Data frame received for 1\nI0509 21:45:50.567289 2700 log.go:172] (0xc0007ea000) (1) Data frame handling\nI0509 21:45:50.567313 2700 log.go:172] (0xc0007ea000) (1) Data frame sent\nI0509 21:45:50.567343 2700 log.go:172] (0xc0000f5550) (0xc0007ea000) Stream removed, broadcasting: 1\nI0509 21:45:50.567378 2700 log.go:172] (0xc0000f5550) Go away received\nI0509 21:45:50.567760 2700 log.go:172] (0xc0000f5550) (0xc0007ea000) Stream removed, broadcasting: 1\nI0509 21:45:50.567789 2700 log.go:172] (0xc0000f5550) (0xc0006459a0) Stream removed, broadcasting: 3\nI0509 21:45:50.567801 2700 log.go:172] (0xc0000f5550) (0xc00079a000) Stream removed, broadcasting: 5\n" May 9 21:45:50.572: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 9 21:45:50.572: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 9 21:45:50.572: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:45:50.652: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 9 21:46:00.659: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 21:46:00.659: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 9 21:46:00.659: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 9 21:46:00.678: INFO: POD NODE PHASE GRACE CONDITIONS May 9 21:46:00.678: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC }] May 9 21:46:00.678: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:00.678: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:00.678: INFO: May 9 21:46:00.678: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 21:46:01.774: INFO: POD NODE PHASE GRACE CONDITIONS May 9 21:46:01.774: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC }] May 9 21:46:01.774: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:01.774: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:01.774: INFO: May 9 21:46:01.774: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 21:46:02.778: INFO: POD NODE PHASE GRACE CONDITIONS May 9 21:46:02.778: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:08 +0000 UTC }] May 9 21:46:02.778: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:02.778: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:02.778: INFO: May 9 21:46:02.779: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 21:46:03.784: INFO: POD NODE PHASE GRACE CONDITIONS May 9 21:46:03.784: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:03.784: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 21:45:29 +0000 UTC }] May 9 21:46:03.784: INFO: May 9 21:46:03.784: INFO: StatefulSet ss has not reached scale 0, at 2 May 9 21:46:04.788: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.882017854s May 9 21:46:05.792: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.877896056s May 9 21:46:06.796: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.873871592s May 9 21:46:07.800: INFO: Verifying statefulset ss doesn't scale past 0 
for another 2.869761977s May 9 21:46:08.805: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.865779717s May 9 21:46:09.809: INFO: Verifying statefulset ss doesn't scale past 0 for another 861.01586ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5150 May 9 21:46:10.813: INFO: Scaling statefulset ss to 0 May 9 21:46:10.824: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 9 21:46:10.826: INFO: Deleting all statefulset in ns statefulset-5150 May 9 21:46:10.829: INFO: Scaling statefulset ss to 0 May 9 21:46:10.837: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:46:10.839: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:10.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5150" for this suite. • [SLOW TEST:62.203 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":119,"skipped":2005,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:10.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-a10860b5-de13-4101-83ab-f5bcacab3695 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:10.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8444" for this suite.
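The empty-key case above never gets as far as scheduling a pod: the apiserver's validation rejects a Secret whose data map contains an empty key, so the test passes as soon as the create call fails. A minimal sketch of the rejected object (name and value are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=                       # base64 "value" under an empty key
EOF
# expected: the apiserver refuses the object with a validation error along
# the lines of: The Secret "secret-emptykey-demo" is invalid: data[]: Invalid value: ""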
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":120,"skipped":2020,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:10.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-lt7g STEP: Creating a pod to test atomic-volume-subpath May 9 21:46:11.057: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lt7g" in namespace "subpath-9938" to be "success or failure" May 9 21:46:11.060: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Pending", Reason="", readiness=false. Elapsed: 3.10202ms May 9 21:46:13.064: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006637575s May 9 21:46:15.067: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 4.01012256s May 9 21:46:17.070: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 6.013303461s May 9 21:46:19.074: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 8.016960918s May 9 21:46:21.078: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 10.020858428s May 9 21:46:23.082: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 12.0252933s May 9 21:46:25.087: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 14.029845445s May 9 21:46:27.091: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 16.034255878s May 9 21:46:29.096: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 18.038470556s May 9 21:46:31.100: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 20.043228541s May 9 21:46:33.105: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Running", Reason="", readiness=true. Elapsed: 22.047812015s May 9 21:46:35.109: INFO: Pod "pod-subpath-test-downwardapi-lt7g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.051936739s STEP: Saw pod success May 9 21:46:35.109: INFO: Pod "pod-subpath-test-downwardapi-lt7g" satisfied condition "success or failure" May 9 21:46:35.112: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-lt7g container test-container-subpath-downwardapi-lt7g: STEP: delete the pod May 9 21:46:35.146: INFO: Waiting for pod pod-subpath-test-downwardapi-lt7g to disappear May 9 21:46:35.157: INFO: Pod pod-subpath-test-downwardapi-lt7g no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lt7g May 9 21:46:35.157: INFO: Deleting pod "pod-subpath-test-downwardapi-lt7g" in namespace "subpath-9938" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:35.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9938" for this suite. • [SLOW TEST:24.210 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":121,"skipped":2022,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:35.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:35.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6419" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":122,"skipped":2030,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:35.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ca3b7d97-ec52-4f2f-bb76-04d6aa16c550 STEP: Creating a pod to test consume secrets May 9 21:46:35.620: INFO: Waiting up to 5m0s for pod "pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18" in namespace "secrets-342" to be "success or failure" May 9 21:46:35.642: INFO: Pod "pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18": Phase="Pending", Reason="", readiness=false. Elapsed: 22.221565ms May 9 21:46:37.647: INFO: Pod "pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026523461s May 9 21:46:39.650: INFO: Pod "pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030403917s STEP: Saw pod success May 9 21:46:39.651: INFO: Pod "pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18" satisfied condition "success or failure" May 9 21:46:39.653: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18 container secret-volume-test: STEP: delete the pod May 9 21:46:39.714: INFO: Waiting for pod pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18 to disappear May 9 21:46:39.756: INFO: Pod pod-secrets-acfbef3c-b3c1-4244-97ad-5d570b33ad18 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:39.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-342" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2033,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:39.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:46:39.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b" in namespace "downward-api-504" to be "success or failure" May 9 21:46:39.852: INFO: Pod "downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.431403ms May 9 21:46:41.856: INFO: Pod "downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019727127s May 9 21:46:43.859: INFO: Pod "downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022857985s STEP: Saw pod success May 9 21:46:43.859: INFO: Pod "downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b" satisfied condition "success or failure" May 9 21:46:43.861: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b container client-container: STEP: delete the pod May 9 21:46:43.896: INFO: Waiting for pod downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b to disappear May 9 21:46:43.927: INFO: Pod downwardapi-volume-ac7df583-45d9-4d89-af0d-eeaa71b58a2b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:43.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-504" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:43.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:46:44.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116" in namespace "projected-2105" to be "success or failure" May 9 21:46:44.006: INFO: Pod "downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055958ms May 9 21:46:46.010: INFO: Pod "downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008020135s May 9 21:46:48.015: INFO: Pod "downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012722558s STEP: Saw pod success May 9 21:46:48.015: INFO: Pod "downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116" satisfied condition "success or failure" May 9 21:46:48.019: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116 container client-container: STEP: delete the pod May 9 21:46:48.058: INFO: Waiting for pod downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116 to disappear May 9 21:46:48.072: INFO: Pod downwardapi-volume-10f4635d-5770-4c59-84f4-d85714e20116 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:48.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2105" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2063,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:48.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8823/configmap-test-ff9e3d3a-41ca-4f7d-997e-85b7fd01f69f STEP: Creating a pod to test consume configMaps May 9 21:46:48.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9" in namespace "configmap-8823" to be "success or failure" May 9 21:46:48.236: INFO: Pod "pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.433211ms May 9 21:46:50.347: INFO: Pod "pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131362756s May 9 21:46:52.351: INFO: Pod "pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.135515776s May 9 21:46:54.355: INFO: Pod "pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138829849s STEP: Saw pod success May 9 21:46:54.355: INFO: Pod "pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9" satisfied condition "success or failure" May 9 21:46:54.357: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9 container env-test: STEP: delete the pod May 9 21:46:54.411: INFO: Waiting for pod pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9 to disappear May 9 21:46:54.443: INFO: Pod pod-configmaps-6deaa832-e170-476f-9e5e-ea2137e7abb9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:54.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8823" for this suite. 
• [SLOW TEST:6.369 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2085,"failed":0} SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:54.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:46:54.626: INFO: Creating deployment "test-recreate-deployment" May 9 21:46:54.642: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 9 21:46:54.668: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 9 21:46:56.676: INFO: Waiting for deployment "test-recreate-deployment" to complete May 9 21:46:56.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657614, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657614, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657614, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657614, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 21:46:58.682: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 9 21:46:58.688: INFO: Updating deployment test-recreate-deployment May 9 21:46:58.688: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 9 21:46:59.205: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2362 /apis/apps/v1/namespaces/deployment-2362/deployments/test-recreate-deployment c462c7cb-5152-40d7-b68d-2355d2b21ee8 14806726 2 2020-05-09 21:46:54 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00259df78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-09 21:46:58 +0000 UTC,LastTransitionTime:2020-05-09 21:46:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-09 21:46:58 +0000 UTC,LastTransitionTime:2020-05-09 21:46:54 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 9 21:46:59.210: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2362 /apis/apps/v1/namespaces/deployment-2362/replicasets/test-recreate-deployment-5f94c574ff af9b24ff-5963-4c95-9966-c5831f647bb2 14806723 1 2020-05-09 21:46:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c462c7cb-5152-40d7-b68d-2355d2b21ee8 0xc0039ca307 0xc0039ca308}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039ca378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:46:59.210: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 
9 21:46:59.210: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-2362 /apis/apps/v1/namespaces/deployment-2362/replicasets/test-recreate-deployment-799c574856 4a1cebca-2a42-4d4e-9c74-51f5a546e92f 14806715 2 2020-05-09 21:46:54 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c462c7cb-5152-40d7-b68d-2355d2b21ee8 0xc0039ca3e7 0xc0039ca3e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039ca458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 9 21:46:59.262: INFO: Pod "test-recreate-deployment-5f94c574ff-l86gk" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-l86gk test-recreate-deployment-5f94c574ff- deployment-2362 /api/v1/namespaces/deployment-2362/pods/test-recreate-deployment-5f94c574ff-l86gk 9e1e7052-3003-4175-9433-82e4ca78c939 14806727 0 2020-05-09 21:46:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff af9b24ff-5963-4c95-9966-c5831f647bb2 0xc0027985f7 0xc0027985f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ms578,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ms578,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ms578,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:46:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:46:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:46:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 21:46:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-09 21:46:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:46:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2362" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":127,"skipped":2088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:46:59.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:46:59.432: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 9 21:47:02.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7714 create -f -' May 9 21:47:03.632: INFO: stderr: "" May 9 21:47:03.632: INFO: stdout: "e2e-test-crd-publish-openapi-8137-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 9 21:47:03.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7714 delete e2e-test-crd-publish-openapi-8137-crds test-cr' May 9 21:47:06.963: INFO: stderr: "" May 9 21:47:06.963: INFO: stdout: "e2e-test-crd-publish-openapi-8137-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 9 21:47:06.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7714 apply -f -' May 9 21:47:07.221: INFO: stderr: "" May 9 21:47:07.221: INFO: stdout: "e2e-test-crd-publish-openapi-8137-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 9 21:47:07.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7714 delete e2e-test-crd-publish-openapi-8137-crds test-cr' May 9 21:47:07.338: INFO: stderr: "" May 9 21:47:07.339: INFO: stdout: "e2e-test-crd-publish-openapi-8137-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 9 21:47:07.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
explain e2e-test-crd-publish-openapi-8137-crds' May 9 21:47:07.619: INFO: stderr: "" May 9 21:47:07.619: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8137-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:09.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7714" for this suite. • [SLOW TEST:10.158 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":128,"skipped":2115,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:09.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:47:09.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b" in namespace "projected-6580" to be "success or failure" May 9 21:47:09.810: INFO: Pod "downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.834797ms May 9 21:47:11.839: INFO: Pod "downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.094616706s May 9 21:47:13.843: INFO: Pod "downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098749335s STEP: Saw pod success May 9 21:47:13.843: INFO: Pod "downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b" satisfied condition "success or failure" May 9 21:47:13.846: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b container client-container: STEP: delete the pod May 9 21:47:13.912: INFO: Waiting for pod downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b to disappear May 9 21:47:13.918: INFO: Pod downwardapi-volume-eab2f88f-2590-425a-9022-a271c47a136b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6580" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2115,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:13.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-50970081-ae62-4251-b750-072285a67e49 STEP: Creating a pod to test consume secrets May 9 21:47:14.038: INFO: Waiting up to 5m0s for pod "pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896" in namespace "secrets-1141" to be "success or failure" May 9 21:47:14.054: INFO: Pod "pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896": Phase="Pending", Reason="", readiness=false. Elapsed: 15.770246ms May 9 21:47:16.180: INFO: Pod "pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142327204s May 9 21:47:18.185: INFO: Pod "pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146930755s STEP: Saw pod success May 9 21:47:18.185: INFO: Pod "pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896" satisfied condition "success or failure" May 9 21:47:18.188: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896 container secret-volume-test: STEP: delete the pod May 9 21:47:18.242: INFO: Waiting for pod pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896 to disappear May 9 21:47:18.248: INFO: Pod pod-secrets-ce9609e4-94ce-4c5a-9186-7b2c1248b896 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:18.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1141" for this suite. 
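(The pod shape exercised here mounts a single Secret at two separate mount points; a minimal sketch, all names illustrative:)

apiVersion: v1
kind: Secret
metadata:
  name: secret-test              # illustrative name
data:
  data-1: dmFsdWUtMQ==           # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test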
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2115,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:18.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:47:18.510: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a6724312-a743-469c-8cf6-421d64b2d5d7", Controller:(*bool)(0xc0030cb7d2), BlockOwnerDeletion:(*bool)(0xc0030cb7d3)}} May 9 21:47:18.526: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"64fbe34f-af7a-4e43-8f91-99bfc663bb4f", Controller:(*bool)(0xc003055992), BlockOwnerDeletion:(*bool)(0xc003055993)}} May 9 21:47:18.560: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bf4334c1-8631-482e-a6ae-7579ef1ed45c", Controller:(*bool)(0xc0030cba2a), BlockOwnerDeletion:(*bool)(0xc0030cba2b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:23.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9420" for this suite. 
• [SLOW TEST:5.460 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":131,"skipped":2126,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:23.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 9 21:47:24.500: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 9 21:47:26.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657644, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657644, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657644, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724657644, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:47:29.548: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:47:29.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:30.923: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4975" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.259 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":132,"skipped":2138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:30.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8317/configmap-test-21d0d72e-df04-4177-a220-d8a458a2890a STEP: Creating a pod to test consume configMaps May 9 21:47:31.052: INFO: Waiting up to 5m0s for pod "pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292" in namespace "configmap-8317" to be "success or failure" May 9 21:47:31.077: INFO: Pod "pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292": Phase="Pending", Reason="", readiness=false. Elapsed: 25.218856ms May 9 21:47:33.082: INFO: Pod "pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029695459s May 9 21:47:35.086: INFO: Pod "pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034064653s STEP: Saw pod success May 9 21:47:35.086: INFO: Pod "pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292" satisfied condition "success or failure" May 9 21:47:35.089: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292 container env-test: STEP: delete the pod May 9 21:47:35.112: INFO: Waiting for pod pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292 to disappear May 9 21:47:35.117: INFO: Pod pod-configmaps-379d0b96-05f6-4c28-8ca8-06c63a64d292 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:35.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8317" for this suite. 
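(The per-key counterpart to envFrom: valueFrom/configMapKeyRef injects one ConfigMap entry into one variable. An illustrative sketch, names made up; it assumes a ConfigMap like the configmap-test example earlier:)

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]   # succeeds only if the variable was injected
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1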
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2166,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:35.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-5ceeac46-69a7-4a6f-b1cf-6b76a74126f3 STEP: Creating a pod to test consume configMaps May 9 21:47:35.266: INFO: Waiting up to 5m0s for pod "pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1" in namespace "configmap-855" to be "success or failure" May 9 21:47:35.272: INFO: Pod "pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.380523ms May 9 21:47:37.275: INFO: Pod "pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009304761s May 9 21:47:39.345: INFO: Pod "pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078683504s STEP: Saw pod success May 9 21:47:39.345: INFO: Pod "pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1" satisfied condition "success or failure" May 9 21:47:39.347: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1 container configmap-volume-test: STEP: delete the pod May 9 21:47:39.375: INFO: Waiting for pod pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1 to disappear May 9 21:47:39.398: INFO: Pod pod-configmaps-87ba0451-2fff-4efb-a41d-fe3cdfec04b1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:39.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-855" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2183,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:39.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e7921a4c-3fb7-4fc3-8883-88a70ab15a0d STEP: Creating a pod to test consume secrets May 9 21:47:39.587: INFO: Waiting up to 5m0s for pod "pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c" in namespace "secrets-7799" to be "success or failure" May 9 21:47:39.589: INFO: Pod "pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618051ms May 9 21:47:41.594: INFO: Pod "pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006697988s May 9 21:47:43.598: INFO: Pod "pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011032487s STEP: Saw pod success May 9 21:47:43.598: INFO: Pod "pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c" satisfied condition "success or failure" May 9 21:47:43.601: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c container secret-volume-test: STEP: delete the pod May 9 21:47:43.621: INFO: Waiting for pod pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c to disappear May 9 21:47:43.626: INFO: Pod pod-secrets-82b61004-7a55-4da0-819e-5c713c44634c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:43.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7799" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2188,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:43.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:47:43.728: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 5.446112ms)
May 9 21:47:43.731: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.818692ms)
May 9 21:47:43.734: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.960155ms)
May 9 21:47:43.738: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.508342ms)
May 9 21:47:43.741: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.772206ms)
May 9 21:47:43.745: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.825306ms)
May 9 21:47:43.748: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.942245ms)
May 9 21:47:43.751: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.183587ms)
May 9 21:47:43.755: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.200297ms)
May 9 21:47:43.758: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.137436ms)
May 9 21:47:43.761: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.674102ms)
May 9 21:47:43.764: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.536803ms)
May 9 21:47:43.767: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.970764ms)
May 9 21:47:43.770: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.347712ms)
May 9 21:47:43.774: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.318243ms)
May 9 21:47:43.777: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.598964ms)
May 9 21:47:43.781: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.308571ms)
May 9 21:47:43.784: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.374482ms)
May 9 21:47:43.788: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.392034ms)
May 9 21:47:43.791: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.741506ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:47:43.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7715" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":136,"skipped":2209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:47:43.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 9 21:47:43.921: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 21:47:43.965: INFO: Waiting for terminating namespaces to be deleted... May 9 21:47:43.968: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 9 21:47:43.974: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 9 21:47:43.974: INFO: Container kindnet-cni ready: true, restart count 0 May 9 21:47:43.974: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 9 21:47:43.974: INFO: Container kube-proxy ready: true, restart count 0 May 9 21:47:43.974: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 9 21:47:43.980: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 9 21:47:43.980: INFO: Container kube-proxy ready: true, restart count 0 May 9 21:47:43.980: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 9 21:47:43.980: INFO: Container kube-hunter ready: false, restart count 0 May 9 21:47:43.980: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 9 21:47:43.980: INFO: Container kindnet-cni ready: true, restart count 0 May 9 21:47:43.980: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 9 21:47:43.980: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-2df10376-85c4-458d-b3e6-7fd7ec3d865d 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-2df10376-85c4-458d-b3e6-7fd7ec3d865d off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-2df10376-85c4-458d-b3e6-7fd7ec3d865d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:52:52.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3279" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.378 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":137,"skipped":2236,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:52:52.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:52:56.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6312" for this suite. 
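(A hostAliases pod of the kind this test runs: the kubelet appends each ip/hostnames pair to the container's /etc/hosts. Names and addresses below are illustrative, not from this run:)

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - foo.local
    - bar.local
  containers:
  - name: busybox-host-aliases
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]   # the appended alias entries should appear here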
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:52:56.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1437081d-1728-4352-a89f-afa8771317d3 STEP: Creating a pod to test consume configMaps May 9 21:52:56.688: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b" in namespace "projected-1142" to be "success or failure" May 9 21:52:56.691: INFO: Pod "pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.149561ms May 9 21:52:58.695: INFO: Pod "pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007131046s May 9 21:53:00.698: INFO: Pod "pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010180271s STEP: Saw pod success May 9 21:53:00.698: INFO: Pod "pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b" satisfied condition "success or failure" May 9 21:53:00.700: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b container projected-configmap-volume-test: STEP: delete the pod May 9 21:53:00.742: INFO: Waiting for pod pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b to disappear May 9 21:53:00.758: INFO: Pod pod-projected-configmaps-ae3f12e7-ed7f-4c19-b56f-9976b447f82b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:53:00.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1142" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2296,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:53:00.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:53:00.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9" in namespace "projected-2413" to be "success or failure" May 9 21:53:00.847: INFO: Pod "downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.50797ms May 9 21:53:02.861: INFO: Pod "downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017497238s May 9 21:53:04.865: INFO: Pod "downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021326757s STEP: Saw pod success May 9 21:53:04.865: INFO: Pod "downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9" satisfied condition "success or failure" May 9 21:53:04.868: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9 container client-container: STEP: delete the pod May 9 21:53:04.942: INFO: Waiting for pod downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9 to disappear May 9 21:53:04.950: INFO: Pod downwardapi-volume-8f427019-d6e2-461a-a8bc-be990fc224a9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:53:04.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2413" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:53:04.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 9 21:53:05.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4572 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 9 21:53:08.720: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0509 21:53:08.653914 2835 log.go:172] (0xc0001066e0) (0xc00066b9a0) Create stream\nI0509 21:53:08.653968 2835 log.go:172] (0xc0001066e0) (0xc00066b9a0) Stream added, broadcasting: 1\nI0509 21:53:08.656101 2835 log.go:172] (0xc0001066e0) Reply frame received for 1\nI0509 21:53:08.656147 2835 log.go:172] (0xc0001066e0) (0xc00066ba40) Create stream\nI0509 21:53:08.656163 2835 log.go:172] (0xc0001066e0) (0xc00066ba40) Stream added, broadcasting: 3\nI0509 21:53:08.657266 2835 log.go:172] (0xc0001066e0) Reply frame received for 3\nI0509 21:53:08.657302 2835 log.go:172] (0xc0001066e0) (0xc0008aa000) Create stream\nI0509 21:53:08.657310 2835 log.go:172] (0xc0001066e0) (0xc0008aa000) Stream added, broadcasting: 5\nI0509 21:53:08.658279 2835 log.go:172] (0xc0001066e0) Reply frame received for 5\nI0509 21:53:08.658313 2835 log.go:172] (0xc0001066e0) (0xc0008aa0a0) Create stream\nI0509 21:53:08.658324 2835 log.go:172] (0xc0001066e0) (0xc0008aa0a0) Stream added, broadcasting: 7\nI0509 21:53:08.659171 2835 log.go:172] (0xc0001066e0) Reply frame received for 7\nI0509 21:53:08.659261 2835 log.go:172] (0xc00066ba40) (3) Writing data frame\nI0509 21:53:08.659325 2835 log.go:172] (0xc00066ba40) (3) Writing data frame\nI0509 21:53:08.660260 2835 log.go:172] (0xc0001066e0) Data frame received for 5\nI0509 21:53:08.660290 2835 log.go:172] (0xc0008aa000) (5) Data frame handling\nI0509 21:53:08.660324 2835 log.go:172] (0xc0008aa000) (5) Data frame sent\nI0509 21:53:08.660882 2835 log.go:172] (0xc0001066e0) Data frame received for 5\nI0509 21:53:08.660901 2835 log.go:172] (0xc0008aa000) (5) Data frame handling\nI0509 21:53:08.660921 2835 log.go:172] (0xc0008aa000) (5) Data frame sent\nI0509 21:53:08.692086 2835 log.go:172] (0xc0001066e0) Data frame received for 5\nI0509 
21:53:08.692113 2835 log.go:172] (0xc0008aa000) (5) Data frame handling\nI0509 21:53:08.692162 2835 log.go:172] (0xc0001066e0) Data frame received for 7\nI0509 21:53:08.692186 2835 log.go:172] (0xc0008aa0a0) (7) Data frame handling\nI0509 21:53:08.692657 2835 log.go:172] (0xc0001066e0) Data frame received for 1\nI0509 21:53:08.692685 2835 log.go:172] (0xc00066b9a0) (1) Data frame handling\nI0509 21:53:08.692716 2835 log.go:172] (0xc00066b9a0) (1) Data frame sent\nI0509 21:53:08.692745 2835 log.go:172] (0xc0001066e0) (0xc00066b9a0) Stream removed, broadcasting: 1\nI0509 21:53:08.692860 2835 log.go:172] (0xc0001066e0) (0xc00066ba40) Stream removed, broadcasting: 3\nI0509 21:53:08.692945 2835 log.go:172] (0xc0001066e0) Go away received\nI0509 21:53:08.693394 2835 log.go:172] (0xc0001066e0) (0xc00066b9a0) Stream removed, broadcasting: 1\nI0509 21:53:08.693426 2835 log.go:172] (0xc0001066e0) (0xc00066ba40) Stream removed, broadcasting: 3\nI0509 21:53:08.693440 2835 log.go:172] (0xc0001066e0) (0xc0008aa000) Stream removed, broadcasting: 5\nI0509 21:53:08.693456 2835 log.go:172] (0xc0001066e0) (0xc0008aa0a0) Stream removed, broadcasting: 7\n" May 9 21:53:08.721: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:53:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4572" for this suite. • [SLOW TEST:5.782 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":141,"skipped":2332,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:53:10.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9570 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9570 I0509 21:53:10.899604 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9570, replica count: 2 I0509 
21:53:13.950075 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 21:53:16.950324 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 9 21:53:16.950: INFO: Creating new exec pod May 9 21:53:21.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9570 execpoddbjr7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 9 21:53:22.209: INFO: stderr: "I0509 21:53:22.107964 2858 log.go:172] (0xc000abf600) (0xc000aa2500) Create stream\nI0509 21:53:22.108043 2858 log.go:172] (0xc000abf600) (0xc000aa2500) Stream added, broadcasting: 1\nI0509 21:53:22.113897 2858 log.go:172] (0xc000abf600) Reply frame received for 1\nI0509 21:53:22.113937 2858 log.go:172] (0xc000abf600) (0xc000757d60) Create stream\nI0509 21:53:22.113946 2858 log.go:172] (0xc000abf600) (0xc000757d60) Stream added, broadcasting: 3\nI0509 21:53:22.114964 2858 log.go:172] (0xc000abf600) Reply frame received for 3\nI0509 21:53:22.115012 2858 log.go:172] (0xc000abf600) (0xc000757e00) Create stream\nI0509 21:53:22.115027 2858 log.go:172] (0xc000abf600) (0xc000757e00) Stream added, broadcasting: 5\nI0509 21:53:22.115931 2858 log.go:172] (0xc000abf600) Reply frame received for 5\nI0509 21:53:22.202544 2858 log.go:172] (0xc000abf600) Data frame received for 3\nI0509 21:53:22.202601 2858 log.go:172] (0xc000757d60) (3) Data frame handling\nI0509 21:53:22.202638 2858 log.go:172] (0xc000abf600) Data frame received for 5\nI0509 21:53:22.202662 2858 log.go:172] (0xc000757e00) (5) Data frame handling\nI0509 21:53:22.202687 2858 log.go:172] (0xc000757e00) (5) Data frame sent\nI0509 21:53:22.202708 2858 log.go:172] (0xc000abf600) Data frame received for 5\nI0509 21:53:22.202725 2858 log.go:172] (0xc000757e00) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0509 21:53:22.204294 2858 log.go:172] (0xc000abf600) Data frame received for 1\nI0509 21:53:22.204317 2858 log.go:172] (0xc000aa2500) (1) Data frame handling\nI0509 21:53:22.204327 2858 log.go:172] (0xc000aa2500) (1) Data frame sent\nI0509 21:53:22.204338 2858 log.go:172] (0xc000abf600) (0xc000aa2500) Stream removed, broadcasting: 1\nI0509 21:53:22.204463 2858 log.go:172] (0xc000abf600) Go away received\nI0509 21:53:22.204640 2858 log.go:172] (0xc000abf600) (0xc000aa2500) Stream removed, broadcasting: 1\nI0509 21:53:22.204658 2858 log.go:172] (0xc000abf600) (0xc000757d60) Stream removed, broadcasting: 3\nI0509 21:53:22.204666 2858 log.go:172] (0xc000abf600) (0xc000757e00) Stream removed, broadcasting: 5\n" May 9 21:53:22.210: INFO: stdout: "" May 9 21:53:22.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9570 execpoddbjr7 -- /bin/sh -x -c nc -zv -t -w 2 10.108.145.149 80' May 9 21:53:22.443: INFO: stderr: "I0509 21:53:22.361654 2878 log.go:172] (0xc0000f66e0) (0xc0006f5c20) Create stream\nI0509 21:53:22.361746 2878 log.go:172] (0xc0000f66e0) (0xc0006f5c20) Stream added, broadcasting: 1\nI0509 21:53:22.365795 2878 log.go:172] (0xc0000f66e0) Reply frame received for 1\nI0509 21:53:22.365835 2878 log.go:172] (0xc0000f66e0) (0xc0006f5cc0) Create stream\nI0509 21:53:22.365847 2878 log.go:172] (0xc0000f66e0) (0xc0006f5cc0) Stream added, broadcasting: 3\nI0509 21:53:22.366955 2878 log.go:172] 
(0xc0000f66e0) Reply frame received for 3\nI0509 21:53:22.367001 2878 log.go:172] (0xc0000f66e0) (0xc00061e6e0) Create stream\nI0509 21:53:22.367017 2878 log.go:172] (0xc0000f66e0) (0xc00061e6e0) Stream added, broadcasting: 5\nI0509 21:53:22.368052 2878 log.go:172] (0xc0000f66e0) Reply frame received for 5\nI0509 21:53:22.437770 2878 log.go:172] (0xc0000f66e0) Data frame received for 3\nI0509 21:53:22.437793 2878 log.go:172] (0xc0006f5cc0) (3) Data frame handling\nI0509 21:53:22.437818 2878 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0509 21:53:22.437839 2878 log.go:172] (0xc00061e6e0) (5) Data frame handling\nI0509 21:53:22.437853 2878 log.go:172] (0xc00061e6e0) (5) Data frame sent\nI0509 21:53:22.437860 2878 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0509 21:53:22.437866 2878 log.go:172] (0xc00061e6e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.145.149 80\nConnection to 10.108.145.149 80 port [tcp/http] succeeded!\nI0509 21:53:22.438759 2878 log.go:172] (0xc0000f66e0) Data frame received for 1\nI0509 21:53:22.438772 2878 log.go:172] (0xc0006f5c20) (1) Data frame handling\nI0509 21:53:22.438783 2878 log.go:172] (0xc0006f5c20) (1) Data frame sent\nI0509 21:53:22.438795 2878 log.go:172] (0xc0000f66e0) (0xc0006f5c20) Stream removed, broadcasting: 1\nI0509 21:53:22.439045 2878 log.go:172] (0xc0000f66e0) (0xc0006f5c20) Stream removed, broadcasting: 1\nI0509 21:53:22.439060 2878 log.go:172] (0xc0000f66e0) (0xc0006f5cc0) Stream removed, broadcasting: 3\nI0509 21:53:22.439069 2878 log.go:172] (0xc0000f66e0) (0xc00061e6e0) Stream removed, broadcasting: 5\n" May 9 21:53:22.443: INFO: stdout: "" May 9 21:53:22.443: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:53:22.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9570" for this suite. 
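Note: the sequence just logged created a Service of type=ExternalName, mutated it in place to type=ClusterIP backed by a replication controller, and verified reachability with nc from an exec pod, first by service name and then by the allocated cluster IP. A hand-run equivalent, with hypothetical names (the exact patch the framework applies may differ):

  # Create an ExternalName service, then convert it to ClusterIP.
  kubectl create service externalname externalname-demo --external-name=example.com
  kubectl patch service externalname-demo -p \
    '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'
  # Probe it the same way the test does, from any pod that has nc:
  kubectl exec some-exec-pod -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-demo 80'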
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.756 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":142,"skipped":2333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:53:22.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 9 21:53:22.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 9 21:53:22.759: INFO: stderr: "" May 9 21:53:22.759: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:53:22.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8480" for this suite. 
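Note: the api-versions check above is the simplest test in this stretch of the run; it shells out to kubectl and asserts that the core group/version v1 appears in the printed list. The same check by hand:

  # -x matches the whole line, so the core API "v1" must be present verbatim.
  kubectl api-versions | grep -x v1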
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":143,"skipped":2372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:53:22.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-25bf13f4-f6ec-4b90-b5c3-f168fce09b45 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-25bf13f4-f6ec-4b90-b5c3-f168fce09b45 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:54:31.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3030" for this suite. • [SLOW TEST:68.516 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2432,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:54:31.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:54:31.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339" in namespace "projected-7856" to be "success or failure" May 9 21:54:31.397: INFO: Pod "downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.15169ms May 9 21:54:33.476: INFO: Pod "downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100544094s May 9 21:54:35.480: INFO: Pod "downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104647808s STEP: Saw pod success May 9 21:54:35.480: INFO: Pod "downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339" satisfied condition "success or failure" May 9 21:54:35.483: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339 container client-container: STEP: delete the pod May 9 21:54:35.510: INFO: Waiting for pod downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339 to disappear May 9 21:54:35.524: INFO: Pod downwardapi-volume-5c5a44fd-ad19-4ec5-9011-d1890ddb3339 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:54:35.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7856" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:54:35.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9725 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9725;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9725 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9725;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9725.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9725.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9725.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9725.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9725.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9725.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9725.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9725.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 174.100.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.100.174_udp@PTR;check="$$(dig +tcp +noall +answer +search 174.100.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.100.174_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9725 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9725;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9725 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9725;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9725.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9725.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9725.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9725.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9725.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9725.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9725.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9725.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9725.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 174.100.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.100.174_udp@PTR;check="$$(dig +tcp +noall +answer +search 174.100.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.100.174_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 21:54:41.807: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.809: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.812: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.819: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.821: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.824: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.911: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.913: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.920: INFO: Unable to read jessie_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.925: INFO: Unable to read jessie_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.928: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.930: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.932: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:41.956: INFO: Lookups using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9725 wheezy_tcp@dns-test-service.dns-9725 wheezy_udp@dns-test-service.dns-9725.svc wheezy_tcp@dns-test-service.dns-9725.svc wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9725 jessie_tcp@dns-test-service.dns-9725 jessie_udp@dns-test-service.dns-9725.svc jessie_tcp@dns-test-service.dns-9725.svc jessie_udp@_http._tcp.dns-test-service.dns-9725.svc jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc] May 9 21:54:46.960: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.964: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:46.983: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.005: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.008: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.011: INFO: Unable to read jessie_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.018: INFO: Unable to read jessie_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.021: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.024: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.027: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:47.047: INFO: Lookups using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9725 wheezy_tcp@dns-test-service.dns-9725 wheezy_udp@dns-test-service.dns-9725.svc wheezy_tcp@dns-test-service.dns-9725.svc wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9725 jessie_tcp@dns-test-service.dns-9725 jessie_udp@dns-test-service.dns-9725.svc jessie_tcp@dns-test-service.dns-9725.svc jessie_udp@_http._tcp.dns-test-service.dns-9725.svc jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc] May 9 21:54:51.961: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.964: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725 from pod 
dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.972: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:51.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.002: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.005: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.007: INFO: Unable to read jessie_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.010: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.013: INFO: Unable to read jessie_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.019: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.022: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:52.040: INFO: Lookups using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9725 wheezy_tcp@dns-test-service.dns-9725 wheezy_udp@dns-test-service.dns-9725.svc wheezy_tcp@dns-test-service.dns-9725.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9725 jessie_tcp@dns-test-service.dns-9725 jessie_udp@dns-test-service.dns-9725.svc jessie_tcp@dns-test-service.dns-9725.svc jessie_udp@_http._tcp.dns-test-service.dns-9725.svc jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc] May 9 21:54:56.959: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.962: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.974: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.976: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.978: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.995: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:56.998: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.001: INFO: Unable to read jessie_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.004: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.007: INFO: Unable to read jessie_udp@dns-test-service.dns-9725.svc from pod 
dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.010: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.013: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.016: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:54:57.036: INFO: Lookups using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9725 wheezy_tcp@dns-test-service.dns-9725 wheezy_udp@dns-test-service.dns-9725.svc wheezy_tcp@dns-test-service.dns-9725.svc wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9725 jessie_tcp@dns-test-service.dns-9725 jessie_udp@dns-test-service.dns-9725.svc jessie_tcp@dns-test-service.dns-9725.svc jessie_udp@_http._tcp.dns-test-service.dns-9725.svc jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc] May 9 21:55:01.961: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.964: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.969: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.972: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.978: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:01.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod 
dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.002: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.005: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.008: INFO: Unable to read jessie_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.011: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.014: INFO: Unable to read jessie_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.019: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.021: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:02.037: INFO: Lookups using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9725 wheezy_tcp@dns-test-service.dns-9725 wheezy_udp@dns-test-service.dns-9725.svc wheezy_tcp@dns-test-service.dns-9725.svc wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9725 jessie_tcp@dns-test-service.dns-9725 jessie_udp@dns-test-service.dns-9725.svc jessie_tcp@dns-test-service.dns-9725.svc jessie_udp@_http._tcp.dns-test-service.dns-9725.svc jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc] May 9 21:55:06.960: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.964: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could 
not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.976: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:06.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.001: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.004: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.007: INFO: Unable to read jessie_udp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.010: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725 from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.013: INFO: Unable to read jessie_udp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.019: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.022: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc from pod dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b: the server could not find the requested resource (get pods dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b) May 9 21:55:07.042: INFO: Lookups using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-9725 wheezy_tcp@dns-test-service.dns-9725 wheezy_udp@dns-test-service.dns-9725.svc wheezy_tcp@dns-test-service.dns-9725.svc wheezy_udp@_http._tcp.dns-test-service.dns-9725.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9725.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9725 jessie_tcp@dns-test-service.dns-9725 jessie_udp@dns-test-service.dns-9725.svc jessie_tcp@dns-test-service.dns-9725.svc jessie_udp@_http._tcp.dns-test-service.dns-9725.svc jessie_tcp@_http._tcp.dns-test-service.dns-9725.svc] May 9 21:55:12.072: INFO: DNS probes using dns-9725/dns-test-bd49fd4e-73bc-41e2-b180-f207d771579b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:55:12.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9725" for this suite. • [SLOW TEST:37.187 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":146,"skipped":2479,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:55:12.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-28eeb139-4624-40d2-977d-9e74ee3b8054 STEP: Creating secret with name secret-projected-all-test-volume-9e1775bd-64b9-4040-941c-f37420b5c604 STEP: Creating a pod to test Check all projections for projected volume plugin May 9 21:55:12.849: INFO: Waiting up to 5m0s for pod "projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126" in namespace "projected-3205" to be "success or failure" May 9 21:55:12.863: INFO: Pod "projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126": Phase="Pending", Reason="", readiness=false. Elapsed: 13.858134ms May 9 21:55:14.867: INFO: Pod "projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017409825s May 9 21:55:16.930: INFO: Pod "projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081155587s May 9 21:55:18.948: INFO: Pod "projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.098456227s STEP: Saw pod success May 9 21:55:18.948: INFO: Pod "projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126" satisfied condition "success or failure" May 9 21:55:18.955: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126 container projected-all-volume-test: STEP: delete the pod May 9 21:55:18.972: INFO: Waiting for pod projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126 to disappear May 9 21:55:18.993: INFO: Pod projected-volume-78abe4df-e6b2-4f0e-858a-cfcd512a8126 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:55:18.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3205" for this suite. • [SLOW TEST:6.282 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2488,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:55:19.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:55:19.103: INFO: Create a RollingUpdate DaemonSet May 9 21:55:19.106: INFO: Check that daemon pods launch on every node of the cluster May 9 21:55:19.111: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:19.116: INFO: Number of nodes with available pods: 0 May 9 21:55:19.116: INFO: Node jerma-worker is running more than one daemon pod May 9 21:55:20.120: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:20.124: INFO: Number of nodes with available pods: 0 May 9 21:55:20.124: INFO: Node jerma-worker is running more than one daemon pod May 9 21:55:21.120: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:21.122: INFO: Number of nodes with available pods: 0 May 9 21:55:21.122: INFO: Node jerma-worker is running more than one daemon pod May 9 21:55:22.208: INFO: DaemonSet pods 
can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:22.249: INFO: Number of nodes with available pods: 1 May 9 21:55:22.249: INFO: Node jerma-worker is running more than one daemon pod May 9 21:55:23.152: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:23.156: INFO: Number of nodes with available pods: 1 May 9 21:55:23.156: INFO: Node jerma-worker is running more than one daemon pod May 9 21:55:24.140: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:24.143: INFO: Number of nodes with available pods: 2 May 9 21:55:24.143: INFO: Number of running nodes: 2, number of available pods: 2 May 9 21:55:24.143: INFO: Update the DaemonSet to trigger a rollout May 9 21:55:24.148: INFO: Updating DaemonSet daemon-set May 9 21:55:30.188: INFO: Roll back the DaemonSet before rollout is complete May 9 21:55:30.194: INFO: Updating DaemonSet daemon-set May 9 21:55:30.194: INFO: Make sure DaemonSet rollback is complete May 9 21:55:30.220: INFO: Wrong image for pod: daemon-set-w5bzf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 9 21:55:30.220: INFO: Pod daemon-set-w5bzf is not available May 9 21:55:30.260: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:31.374: INFO: Wrong image for pod: daemon-set-w5bzf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 9 21:55:31.374: INFO: Pod daemon-set-w5bzf is not available May 9 21:55:31.377: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:32.264: INFO: Wrong image for pod: daemon-set-w5bzf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 9 21:55:32.264: INFO: Pod daemon-set-w5bzf is not available May 9 21:55:32.269: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 21:55:33.264: INFO: Pod daemon-set-vd7hh is not available May 9 21:55:33.269: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2930, will wait for the garbage collector to delete the pods May 9 21:55:33.334: INFO: Deleting DaemonSet.extensions daemon-set took: 6.8819ms May 9 21:55:33.634: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.258152ms May 9 21:55:39.338: INFO: Number of nodes with available pods: 0 May 9 21:55:39.338: INFO: Number of running nodes: 0, number of available pods: 0 May 9 21:55:39.341: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2930/daemonsets","resourceVersion":"14808992"},"items":null} May 9 21:55:39.344: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2930/pods","resourceVersion":"14808992"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:55:39.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2930" for this suite. • [SLOW TEST:20.364 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":148,"skipped":2490,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:55:39.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 9 21:55:39.418: INFO: Waiting up to 5m0s for pod "pod-4c48c70d-08fc-4572-b774-848623d7db46" in namespace "emptydir-3918" to be "success or failure" May 9 21:55:39.439: INFO: Pod "pod-4c48c70d-08fc-4572-b774-848623d7db46": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.98201ms May 9 21:55:41.443: INFO: Pod "pod-4c48c70d-08fc-4572-b774-848623d7db46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02460926s May 9 21:55:43.447: INFO: Pod "pod-4c48c70d-08fc-4572-b774-848623d7db46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029137306s STEP: Saw pod success May 9 21:55:43.447: INFO: Pod "pod-4c48c70d-08fc-4572-b774-848623d7db46" satisfied condition "success or failure" May 9 21:55:43.451: INFO: Trying to get logs from node jerma-worker2 pod pod-4c48c70d-08fc-4572-b774-848623d7db46 container test-container: STEP: delete the pod May 9 21:55:43.491: INFO: Waiting for pod pod-4c48c70d-08fc-4572-b774-848623d7db46 to disappear May 9 21:55:43.524: INFO: Pod pod-4c48c70d-08fc-4572-b774-848623d7db46 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:55:43.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3918" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2503,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:55:43.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0509 21:55:44.896925 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 9 21:55:44.896: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:55:44.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7615" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":150,"skipped":2506,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:55:45.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0509 21:55:55.147539 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 9 21:55:55.147: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:55:55.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2308" for this suite. • [SLOW TEST:10.159 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":151,"skipped":2516,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:55:55.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:55:55.243: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 9 21:55:58.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4222 create -f -' May 9 21:56:01.550: INFO: stderr: "" May 9 21:56:01.550: INFO: stdout: "e2e-test-crd-publish-openapi-2141-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 9 21:56:01.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4222 delete e2e-test-crd-publish-openapi-2141-crds test-cr' May 9 21:56:01.659: INFO: stderr: "" May 9 21:56:01.659: INFO: stdout: "e2e-test-crd-publish-openapi-2141-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 9 21:56:01.659: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4222 apply -f -' May 9 21:56:01.909: INFO: stderr: "" May 9 21:56:01.909: INFO: stdout: "e2e-test-crd-publish-openapi-2141-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 9 21:56:01.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4222 delete e2e-test-crd-publish-openapi-2141-crds test-cr' May 9 21:56:02.030: INFO: stderr: "" May 9 21:56:02.030: INFO: stdout: "e2e-test-crd-publish-openapi-2141-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 9 21:56:02.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2141-crds' May 9 21:56:02.317: INFO: stderr: "" May 9 21:56:02.317: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2141-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:05.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4222" for this suite. • [SLOW TEST:10.061 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":152,"skipped":2524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:05.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:09.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8085" for this suite. 
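The image-defaults check that just completed works by leaving both spec.containers[].command and spec.containers[].args unset, so the container runtime falls back to the image's own ENTRYPOINT and CMD. A minimal hand-run sketch of the same behaviour, assuming a cluster reachable through the same kubeconfig (the pod name below is illustrative; the httpd image is the one this suite uses elsewhere):

# No command/args given, so the image's default entrypoint (httpd) runs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: main
    image: docker.io/library/httpd:2.4.38-alpine   # command and args intentionally omitted
EOF
# Once the pod is Running, the logs show httpd starting via its default entrypoint.
kubectl logs image-defaults-demo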
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2552,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:09.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:13.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6124" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":154,"skipped":2562,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:13.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 9 21:56:18.156: INFO: Successfully updated pod "labelsupdateff79d229-bc1c-4a83-bc9b-ccfabab710c0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:22.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4890" for this suite. 
• [SLOW TEST:8.652 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2564,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:22.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:56:22.856: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 21:56:25.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658182, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658182, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658182, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658182, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 21:56:28.108: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:56:28.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2625-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:29.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7704" for this 
suite. STEP: Destroying namespace "webhook-7704-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.180 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":156,"skipped":2569,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:29.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:56:29.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63" in namespace "projected-7634" to be "success or failure" May 9 21:56:29.428: INFO: Pod "downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809693ms May 9 21:56:31.434: INFO: Pod "downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009581342s May 9 21:56:33.530: INFO: Pod "downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106067555s STEP: Saw pod success May 9 21:56:33.530: INFO: Pod "downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63" satisfied condition "success or failure" May 9 21:56:33.657: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63 container client-container: STEP: delete the pod May 9 21:56:33.669: INFO: Waiting for pod downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63 to disappear May 9 21:56:33.674: INFO: Pod downwardapi-volume-8380e4e7-88f7-4e89-9009-322ca6738a63 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:33.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7634" for this suite. 
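The cpu-limit check whose summary follows exposes a container's resource limit to it through a projected downward API volume (resourceFieldRef). A minimal sketch with illustrative names; note that with the default divisor of 1, a 500m limit is rounded up and read back as 1:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: main
              resource: limits.cpu
EOF
kubectl logs cpu-limit-demo    # prints 1 (500m rounded up with the default divisor)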
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2577,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:33.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 21:56:33.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5" in namespace "downward-api-4524" to be "success or failure" May 9 21:56:33.833: INFO: Pod "downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.05315ms May 9 21:56:35.837: INFO: Pod "downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02621635s May 9 21:56:37.841: INFO: Pod "downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029559946s STEP: Saw pod success May 9 21:56:37.841: INFO: Pod "downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5" satisfied condition "success or failure" May 9 21:56:37.843: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5 container client-container: STEP: delete the pod May 9 21:56:37.894: INFO: Waiting for pod downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5 to disappear May 9 21:56:38.027: INFO: Pod downwardapi-volume-94c93759-18dc-4a48-84ad-f5dd3b6ac5f5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:38.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4524" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2581,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:38.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 21:56:38.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5761' May 9 21:56:38.210: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 21:56:38.210: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 9 21:56:38.240: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4qdjz] May 9 21:56:38.240: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4qdjz" in namespace "kubectl-5761" to be "running and ready" May 9 21:56:38.302: INFO: Pod "e2e-test-httpd-rc-4qdjz": Phase="Pending", Reason="", readiness=false. Elapsed: 61.757891ms May 9 21:56:40.306: INFO: Pod "e2e-test-httpd-rc-4qdjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065769575s May 9 21:56:42.311: INFO: Pod "e2e-test-httpd-rc-4qdjz": Phase="Running", Reason="", readiness=true. Elapsed: 4.070454077s May 9 21:56:42.311: INFO: Pod "e2e-test-httpd-rc-4qdjz" satisfied condition "running and ready" May 9 21:56:42.311: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4qdjz] May 9 21:56:42.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5761' May 9 21:56:42.441: INFO: stderr: "" May 9 21:56:42.441: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.242. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.242. 
Set the 'ServerName' directive globally to suppress this message\n[Sat May 09 21:56:40.846258 2020] [mpm_event:notice] [pid 1:tid 139831230393192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat May 09 21:56:40.846324 2020] [core:notice] [pid 1:tid 139831230393192] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 9 21:56:42.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5761' May 9 21:56:42.540: INFO: stderr: "" May 9 21:56:42.540: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:42.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5761" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":159,"skipped":2601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:42.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:53.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-16" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":160,"skipped":2629,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:53.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 9 21:56:53.731: INFO: Waiting up to 5m0s for pod "client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322" in namespace "containers-2860" to be "success or failure" May 9 21:56:53.735: INFO: Pod "client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536246ms May 9 21:56:55.793: INFO: Pod "client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062279094s May 9 21:56:57.798: INFO: Pod "client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066728033s STEP: Saw pod success May 9 21:56:57.798: INFO: Pod "client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322" satisfied condition "success or failure" May 9 21:56:57.801: INFO: Trying to get logs from node jerma-worker2 pod client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322 container test-container: STEP: delete the pod May 9 21:56:57.836: INFO: Waiting for pod client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322 to disappear May 9 21:56:57.845: INFO: Pod client-containers-d1174983-a483-4aa4-b1b5-2d036dbfc322 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:56:57.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2860" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:56:57.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8817.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8817.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 216.39.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.39.216_udp@PTR;check="$$(dig +tcp +noall +answer +search 216.39.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.39.216_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8817.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8817.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 216.39.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.39.216_udp@PTR;check="$$(dig +tcp +noall +answer +search 216.39.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.39.216_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 21:57:04.027: INFO: Unable to read wheezy_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.030: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.032: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.035: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.051: INFO: Unable to read jessie_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.054: INFO: Unable to read jessie_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.057: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.059: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:04.077: INFO: Lookups using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f failed for: [wheezy_udp@dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_udp@dns-test-service.dns-8817.svc.cluster.local jessie_tcp@dns-test-service.dns-8817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local] May 9 21:57:09.082: INFO: Unable to read wheezy_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods 
dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.090: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.093: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.116: INFO: Unable to read jessie_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.120: INFO: Unable to read jessie_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.123: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.125: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:09.190: INFO: Lookups using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f failed for: [wheezy_udp@dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_udp@dns-test-service.dns-8817.svc.cluster.local jessie_tcp@dns-test-service.dns-8817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local] May 9 21:57:14.081: INFO: Unable to read wheezy_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.088: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.091: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.113: INFO: Unable to read jessie_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could 
not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.139: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.143: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:14.163: INFO: Lookups using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f failed for: [wheezy_udp@dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_udp@dns-test-service.dns-8817.svc.cluster.local jessie_tcp@dns-test-service.dns-8817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local] May 9 21:57:19.082: INFO: Unable to read wheezy_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.114: INFO: Unable to read jessie_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.117: INFO: Unable to read jessie_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.121: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.124: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod 
dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:19.151: INFO: Lookups using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f failed for: [wheezy_udp@dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_udp@dns-test-service.dns-8817.svc.cluster.local jessie_tcp@dns-test-service.dns-8817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local] May 9 21:57:24.082: INFO: Unable to read wheezy_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.090: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.094: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.117: INFO: Unable to read jessie_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.120: INFO: Unable to read jessie_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.123: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.126: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:24.145: INFO: Lookups using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f failed for: [wheezy_udp@dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_udp@dns-test-service.dns-8817.svc.cluster.local jessie_tcp@dns-test-service.dns-8817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local] May 9 21:57:29.082: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.094: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.112: INFO: Unable to read jessie_udp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local from pod dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f: the server could not find the requested resource (get pods dns-test-7df43c58-2465-45c2-b449-3259982a318f) May 9 21:57:29.149: INFO: Lookups using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f failed for: [wheezy_udp@dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@dns-test-service.dns-8817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_udp@dns-test-service.dns-8817.svc.cluster.local jessie_tcp@dns-test-service.dns-8817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8817.svc.cluster.local] May 9 21:57:34.144: INFO: DNS probes using dns-8817/dns-test-7df43c58-2465-45c2-b449-3259982a318f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:57:34.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8817" for this suite. 
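------------------------------
The probe names above decode as record-type @ query-name: the plain names (dns-test-service.dns-8817.svc.cluster.local) resolve because a Service of that name exists, the SRV-style names (_http._tcp.dns-test-service...) resolve because the Service declares a named "http" port, and the headless variant resolves straight to pod IPs. A minimal sketch of the two Service shapes, assuming the v1.17-era k8s.io/api Go types; the service and namespace names are taken from the log, while the selector and the headless service's name are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    selector := map[string]string{"dns-test": "true"} // illustrative selector

    // Regular ClusterIP Service: the named "http" TCP port is what makes
    // _http._tcp.dns-test-service.dns-8817.svc.cluster.local answerable as SRV.
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: "dns-8817"},
        Spec: corev1.ServiceSpec{
            Selector: selector,
            Ports:    []corev1.ServicePort{{Name: "http", Port: 80}},
        },
    }

    // Headless variant: ClusterIP "None" makes the name resolve directly
    // to the ready pod IPs instead of a single virtual IP.
    headless := svc
    headless.ObjectMeta = metav1.ObjectMeta{Name: "dns-test-service-headless", Namespace: "dns-8817"}
    headless.Spec.ClusterIP = corev1.ClusterIPNone

    for _, s := range []corev1.Service{svc, headless} {
        b, _ := json.MarshalIndent(s, "", "  ")
        fmt.Println(string(b))
    }
}
------------------------------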
• [SLOW TEST:36.928 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":162,"skipped":2658,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:57:34.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 9 21:57:39.152: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:57:39.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2622" for this suite. 
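------------------------------
The assertion Expected: &{OK} to match Container's Termination Message: OK above holds because the container writes OK into its termination-message file and exits zero; with TerminationMessagePolicy FallbackToLogsOnError the file still takes precedence whenever it is non-empty, and the log fallback only applies on error. A minimal pod sketch, assuming the v1.17-era k8s.io/api Go types; the pod name, image, and command are illustrative, not the suite's actual fixture:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-file"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
                // The kubelet reads this file into the container's
                // terminated state as its termination message.
                TerminationMessagePath:   "/dev/termination-log",
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------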
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2662,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:57:39.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8055 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 9 21:57:39.382: INFO: Found 0 stateful pods, waiting for 3 May 9 21:57:49.391: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:57:49.391: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:57:49.391: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 9 21:57:59.387: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:57:59.387: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:57:59.387: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 9 21:57:59.432: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 9 21:58:09.488: INFO: Updating stateful set ss2 May 9 21:58:09.522: INFO: Waiting for Pod statefulset-8055/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 9 21:58:19.723: INFO: Found 2 stateful pods, waiting for 3 May 9 21:58:29.727: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 21:58:29.727: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 21:58:29.727: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 9 21:58:29.753: INFO: Updating stateful set ss2 May 9 21:58:29.793: INFO: Waiting for Pod statefulset-8055/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 9 21:58:39.818: INFO: Updating stateful set ss2 May 9 
21:58:39.834: INFO: Waiting for StatefulSet statefulset-8055/ss2 to complete update May 9 21:58:39.834: INFO: Waiting for Pod statefulset-8055/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 9 21:58:49.842: INFO: Waiting for StatefulSet statefulset-8055/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 9 21:58:59.842: INFO: Deleting all statefulset in ns statefulset-8055 May 9 21:58:59.846: INFO: Scaling statefulset ss2 to 0 May 9 21:59:19.866: INFO: Waiting for statefulset status.replicas updated to 0 May 9 21:59:19.870: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:19.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8055" for this suite. • [SLOW TEST:100.576 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":164,"skipped":2664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:19.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:59:19.937: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:26.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2354" for this suite. 
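------------------------------
In the StatefulSet run above (statefulset-8055), the canary and phased steps are both driven by one knob, spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal >= the partition are recreated at the new revision, so partition 3 on a 3-replica set updates nothing, partition 2 updates only ss2-2 (the canary), and walking the value down toward 0 produces the phased rollout. A minimal sketch of that strategy, assuming the v1.17-era k8s.io/api/apps/v1 Go types; the pointer helper is local:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // Partition 2 on a 3-replica set ("ss2") means only ordinal 2 (ss2-2)
    // is recreated at the new revision: a one-pod canary.
    strategy := appsv1.StatefulSetUpdateStrategy{
        Type: appsv1.RollingUpdateStatefulSetStrategyType,
        RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
            Partition: int32Ptr(2),
        },
    }
    b, _ := json.MarshalIndent(strategy, "", "  ")
    fmt.Println(string(b))
    // Lowering Partition to 1 and then 0 performs what the test calls a
    // "phased rolling update".
}
------------------------------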
• [SLOW TEST:6.556 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":165,"skipped":2705,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:26.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 9 21:59:34.594: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 21:59:34.622: INFO: Pod pod-with-prestop-exec-hook still exists May 9 21:59:36.623: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 21:59:36.627: INFO: Pod pod-with-prestop-exec-hook still exists May 9 21:59:38.623: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 21:59:38.627: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:38.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1910" for this suite. 
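------------------------------
The delete-poll loop above is the PreStop hook at work: on deletion the kubelet first runs the exec hook inside the container (here it reports back to the helper pod from the earlier "create the container to handle the HTTPGet hook request" step) and only then kills the container, which is why the pod lingers for several poll cycles. A minimal sketch of a pod carrying such a hook, assuming the v1.17-era k8s.io/api Go types, where the hook handler type is still v1.Handler; the hook's target URL is illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "pod-with-prestop-exec-hook",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    // Runs inside the container before the kill signal is
                    // sent; deletion blocks until it finishes or the
                    // termination grace period expires.
                    PreStop: &corev1.Handler{
                        Exec: &corev1.ExecAction{
                            Command: []string{"sh", "-c",
                                "wget -qO- http://10.0.0.1:8080/echo?msg=prestop"}, // illustrative target
                        },
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------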
• [SLOW TEST:12.203 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2724,"failed":0} SSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:38.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:38.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3181" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":167,"skipped":2730,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:38.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-71aab26a-8653-4147-977a-276fc8175807 STEP: Creating a pod to test consume secrets May 9 21:59:39.134: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98" in namespace "projected-1687" to be "success or failure" May 9 21:59:39.150: INFO: Pod "pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98": Phase="Pending", Reason="", readiness=false. Elapsed: 16.416724ms May 9 21:59:41.198: INFO: Pod "pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064099683s May 9 21:59:43.202: INFO: Pod "pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067944696s STEP: Saw pod success May 9 21:59:43.202: INFO: Pod "pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98" satisfied condition "success or failure" May 9 21:59:43.204: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98 container projected-secret-volume-test: STEP: delete the pod May 9 21:59:43.350: INFO: Waiting for pod pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98 to disappear May 9 21:59:43.444: INFO: Pod pod-projected-secrets-f94e55c5-3a52-4b5c-aa2c-ec9eca2f7c98 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:43.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1687" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:43.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 9 21:59:47.636: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:47.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8915" for this suite. 
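------------------------------
This second termination-message test is the mirror image of the file-based one earlier: the container exits non-zero and writes nothing to its termination-message file, so FallbackToLogsOnError makes the kubelet take the message (DONE) from the tail of the container log instead. A minimal sketch under the same v1.17-era k8s.io/api type assumptions; names and image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-logs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "main",
                Image: "docker.io/library/busybox:1.29",
                // Print to stdout and fail; the termination file stays
                // empty, so the log tail becomes the message.
                Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
                TerminationMessagePath:   "/dev/termination-log",
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------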
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2796,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:47.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 21:59:48.136: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ce091621-5167-4dd9-955c-4255aeca7193" in namespace "security-context-test-1708" to be "success or failure" May 9 21:59:48.157: INFO: Pod "busybox-readonly-false-ce091621-5167-4dd9-955c-4255aeca7193": Phase="Pending", Reason="", readiness=false. Elapsed: 20.560766ms May 9 21:59:50.189: INFO: Pod "busybox-readonly-false-ce091621-5167-4dd9-955c-4255aeca7193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052557687s May 9 21:59:52.193: INFO: Pod "busybox-readonly-false-ce091621-5167-4dd9-955c-4255aeca7193": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056572915s May 9 21:59:52.193: INFO: Pod "busybox-readonly-false-ce091621-5167-4dd9-955c-4255aeca7193" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:52.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1708" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2803,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:52.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 9 21:59:57.031: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9713 pod-service-account-a508386d-9ee0-4b63-af27-799d52ca9e8b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 9 21:59:57.251: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9713 pod-service-account-a508386d-9ee0-4b63-af27-799d52ca9e8b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 9 21:59:57.501: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9713 pod-service-account-a508386d-9ee0-4b63-af27-799d52ca9e8b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 21:59:57.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9713" for this suite. 
• [SLOW TEST:5.537 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":171,"skipped":2803,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 21:59:57.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 21:59:58.403: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:00:00.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 22:00:02.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:00:05.513: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:00:05.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4881" for this suite. STEP: Destroying namespace "webhook-4881-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.084 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":172,"skipped":2822,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:00:05.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 9 22:00:05.878: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:00:21.422: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7018" for this suite. • [SLOW TEST:15.603 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":173,"skipped":2831,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:00:21.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 9 22:00:21.474: INFO: PodSpec: initContainers in spec.initContainers May 9 22:01:12.774: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-049fab66-160e-41e3-9d1a-7b8f320aaefe", GenerateName:"", Namespace:"init-container-9989", SelfLink:"/api/v1/namespaces/init-container-9989/pods/pod-init-049fab66-160e-41e3-9d1a-7b8f320aaefe", UID:"2a9e9208-f230-4f80-8e28-e493f46f7f64", ResourceVersion:"14811083", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724658421, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"474196739"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7pzxc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0058bd2c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7pzxc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7pzxc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7pzxc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024fe9f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029bed80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024feaa0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024feac0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024feac8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024feacc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658421, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658421, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658421, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658421, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.252", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.252"}}, StartTime:(*v1.Time)(0xc0043d6d00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0043d6e00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004aa3f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dd3220d3a68814081a43aeb87d8c728a4d4f6ed09eea376903366cb6172e253c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0043d6e80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0043d6d80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0024febff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:12.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9989" for this suite. • [SLOW TEST:51.561 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":174,"skipped":2834,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:12.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 9 22:01:13.144: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:28.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9398" for this suite. 
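------------------------------
"Mark a version not served" above is a one-field edit on the CRD: each entry in spec.versions carries its own served flag, and flipping it to false removes that version's definitions from the published OpenAPI document while the storage version keeps serving. A minimal two-version sketch, assuming the k8s.io/apiextensions-apiserver v1 Go types; the group, kind, and version names are illustrative, not the suite's generated fixtures:

package main

import (
    "encoding/json"
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    schema := &apiextensionsv1.CustomResourceValidation{
        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
    }
    crd := apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-tests.crd-publish-openapi-test.example.com"},
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "crd-publish-openapi-test.example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "e2e-tests", Singular: "e2e-test", Kind: "E2eTest", ListKind: "E2eTestList",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
                // Still stored and served.
                {Name: "v2", Served: true, Storage: true, Schema: schema},
                // Served=false: dropped from the published OpenAPI spec,
                // which is exactly what the test asserts.
                {Name: "v1", Served: false, Storage: false, Schema: schema},
            },
        },
    }
    b, _ := json.MarshalIndent(crd, "", "  ")
    fmt.Println(string(b))
}
------------------------------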
• [SLOW TEST:15.314 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":175,"skipped":2837,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:28.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:45.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9338" for this suite. • [SLOW TEST:17.185 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":176,"skipped":2860,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:45.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-194d9c43-24ca-4161-a7b0-8289b6189f4f STEP: Creating a pod to test consume configMaps May 9 22:01:45.564: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1" in namespace "projected-510" to be "success or failure" May 9 22:01:45.643: INFO: Pod "pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1": Phase="Pending", Reason="", readiness=false. Elapsed: 78.202491ms May 9 22:01:47.647: INFO: Pod "pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082260346s May 9 22:01:49.650: INFO: Pod "pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085830894s STEP: Saw pod success May 9 22:01:49.650: INFO: Pod "pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1" satisfied condition "success or failure" May 9 22:01:49.652: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1 container projected-configmap-volume-test: STEP: delete the pod May 9 22:01:49.678: INFO: Waiting for pod pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1 to disappear May 9 22:01:49.702: INFO: Pod pod-projected-configmaps-c7db890e-3050-4ce3-9c08-83168493ccb1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:49.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-510" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2864,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:49.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 22:01:49.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a" in namespace "projected-5482" to be "success or failure" May 9 22:01:49.792: INFO: Pod "downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.824047ms May 9 22:01:51.881: INFO: Pod "downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107244607s May 9 22:01:53.900: INFO: Pod "downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125939406s STEP: Saw pod success May 9 22:01:53.900: INFO: Pod "downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a" satisfied condition "success or failure" May 9 22:01:53.902: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a container client-container: STEP: delete the pod May 9 22:01:53.955: INFO: Waiting for pod downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a to disappear May 9 22:01:53.982: INFO: Pod downwardapi-volume-d9c5b9f4-b681-40de-909d-474136fc754a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:53.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5482" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2866,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:53.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:01:54.058: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:55.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9590" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":179,"skipped":2869,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:55.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b485f4fc-352f-40e2-9234-c5951f9724fc STEP: Creating a pod to test consume secrets May 9 22:01:55.211: INFO: Waiting up to 5m0s for pod "pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a" in namespace "secrets-6263" to be "success or failure" May 9 22:01:55.214: INFO: Pod "pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.748336ms May 9 22:01:57.219: INFO: Pod "pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007143968s May 9 22:01:59.222: INFO: Pod "pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010355194s STEP: Saw pod success May 9 22:01:59.222: INFO: Pod "pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a" satisfied condition "success or failure" May 9 22:01:59.224: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a container secret-volume-test: STEP: delete the pod May 9 22:01:59.247: INFO: Waiting for pod pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a to disappear May 9 22:01:59.250: INFO: Pod pod-secrets-5a006b14-e0e3-4934-bc1a-5d3482c96e9a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:01:59.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6263" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2875,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:01:59.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-1b7361df-82a0-4902-97e5-51293b8e480b STEP: Creating a pod to test consume secrets May 9 22:01:59.361: INFO: Waiting up to 5m0s for pod "pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04" in namespace "secrets-5540" to be "success or failure" May 9 22:01:59.414: INFO: Pod "pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04": Phase="Pending", Reason="", readiness=false. Elapsed: 52.378668ms May 9 22:02:01.418: INFO: Pod "pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056320089s May 9 22:02:03.422: INFO: Pod "pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060362677s STEP: Saw pod success May 9 22:02:03.422: INFO: Pod "pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04" satisfied condition "success or failure" May 9 22:02:03.424: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04 container secret-volume-test: STEP: delete the pod May 9 22:02:03.444: INFO: Waiting for pod pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04 to disappear May 9 22:02:03.448: INFO: Pod pod-secrets-9964b750-c2d7-4bc7-b5dc-e93fa5fd2b04 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:03.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5540" for this suite. 
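For reference, both Secrets specs above come down to mounting a secret as a volume; the second one additionally remaps the key to a custom path and sets a per-item file mode (the "Item Mode" in the spec title). A sketch under illustrative names (secret name, key, path, and mode are not the test's generated ones):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1              # key in the secret
          path: new-path-data-1    # file name inside the mount
          mode: 0400               # per-item file mode
  EOF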
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2882,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:03.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 9 22:02:08.618: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:09.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6086" for this suite. • [SLOW TEST:6.184 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":182,"skipped":2897,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:09.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:02:29.840: INFO: Container started at 2020-05-09 22:02:12 +0000 UTC, pod became ready at 2020-05-09 22:02:29 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:29.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-4743" for this suite. • [SLOW TEST:20.208 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2903,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:29.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-f5d15448-0b9a-471b-9612-aa2b655caa6a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:36.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9573" for this suite. 
• [SLOW TEST:6.227 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2924,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:36.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 9 22:02:36.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6357' May 9 22:02:36.510: INFO: stderr: "" May 9 22:02:36.510: INFO: stdout: "pod/pause created\n" May 9 22:02:36.510: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 9 22:02:36.510: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6357" to be "running and ready" May 9 22:02:36.559: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 49.227291ms May 9 22:02:38.563: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052888346s May 9 22:02:40.566: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.056081149s May 9 22:02:40.566: INFO: Pod "pause" satisfied condition "running and ready" May 9 22:02:40.566: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 9 22:02:40.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6357' May 9 22:02:40.660: INFO: stderr: "" May 9 22:02:40.660: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 9 22:02:40.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6357' May 9 22:02:40.750: INFO: stderr: "" May 9 22:02:40.750: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 9 22:02:40.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6357' May 9 22:02:40.867: INFO: stderr: "" May 9 22:02:40.867: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 9 22:02:40.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6357' May 9 22:02:40.950: INFO: stderr: "" May 9 22:02:40.950: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 9 22:02:40.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6357' May 9 22:02:41.097: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 22:02:41.097: INFO: stdout: "pod \"pause\" force deleted\n" May 9 22:02:41.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6357' May 9 22:02:41.398: INFO: stderr: "No resources found in kubectl-6357 namespace.\n" May 9 22:02:41.398: INFO: stdout: "" May 9 22:02:41.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6357 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 22:02:41.515: INFO: stderr: "" May 9 22:02:41.515: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:41.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6357" for this suite. 
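The label spec above is reproducible with plain kubectl against any running pod; the pause pod name and the label key/value below come straight from the logged commands:

  kubectl label pod pause testing-label=testing-label-value
  kubectl get pod pause -L testing-label     # -L adds a TESTING-LABEL column showing the value
  kubectl label pod pause testing-label-     # a trailing dash removes the label
  kubectl get pod pause -L testing-label     # the column is now empty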
• [SLOW TEST:5.578 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":185,"skipped":2933,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:41.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 9 22:02:41.790: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix766059585/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:42.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9814" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":186,"skipped":2953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:42.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 22:02:42.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5" in namespace "downward-api-9804" to be "success or failure" May 9 22:02:42.390: INFO: Pod "downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5": Phase="Pending", Reason="", readiness=false.
Elapsed: 16.107873ms May 9 22:02:44.511: INFO: Pod "downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137983357s May 9 22:02:46.515: INFO: Pod "downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142046983s STEP: Saw pod success May 9 22:02:46.516: INFO: Pod "downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5" satisfied condition "success or failure" May 9 22:02:46.519: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5 container client-container: STEP: delete the pod May 9 22:02:46.652: INFO: Waiting for pod downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5 to disappear May 9 22:02:46.708: INFO: Pod downwardapi-volume-efee35e8-2a3b-432b-9655-7307cf2db8b5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:46.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9804" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3000,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:46.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7211.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7211.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7211.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7211.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7211.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7211.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 22:02:53.062: INFO: DNS probes using dns-7211/dns-test-ae34f36b-7a59-4392-823b-8993cabc3b92 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:53.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7211" for this suite. • [SLOW TEST:6.374 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":188,"skipped":3000,"failed":0} SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:53.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:02:53.327: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:02:59.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1226" for this suite. 
• [SLOW TEST:6.270 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3003,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:02:59.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 22:02:59.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f" in namespace "downward-api-1661" to be "success or failure" May 9 22:02:59.580: INFO: Pod "downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.990558ms May 9 22:03:01.619: INFO: Pod "downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076386202s May 9 22:03:03.732: INFO: Pod "downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189106006s STEP: Saw pod success May 9 22:03:03.732: INFO: Pod "downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f" satisfied condition "success or failure" May 9 22:03:03.805: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f container client-container: STEP: delete the pod May 9 22:03:03.850: INFO: Waiting for pod downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f to disappear May 9 22:03:03.891: INFO: Pod downwardapi-volume-a063f0a4-0226-4a09-b7d9-93c67df8385f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:03:03.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1661" for this suite. 
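For reference, the two Downward API volume specs above (memory request earlier, memory limit here) read the container's own resources through resourceFieldRef items. A combined sketch, with illustrative names, image, and sizes:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-resources-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_request /etc/podinfo/mem_limit"]
      resources:
        requests:
          memory: 32Mi
        limits:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF

The values are written in bytes by default; a resourceFieldRef divisor can be set to report in other units.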
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3007,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:03:03.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 9 22:03:04.003: INFO: Waiting up to 5m0s for pod "pod-b7109ccf-092a-4036-924c-9a7e60919f8a" in namespace "emptydir-1484" to be "success or failure" May 9 22:03:04.020: INFO: Pod "pod-b7109ccf-092a-4036-924c-9a7e60919f8a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.663287ms May 9 22:03:06.024: INFO: Pod "pod-b7109ccf-092a-4036-924c-9a7e60919f8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020674265s May 9 22:03:08.028: INFO: Pod "pod-b7109ccf-092a-4036-924c-9a7e60919f8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024783012s STEP: Saw pod success May 9 22:03:08.028: INFO: Pod "pod-b7109ccf-092a-4036-924c-9a7e60919f8a" satisfied condition "success or failure" May 9 22:03:08.031: INFO: Trying to get logs from node jerma-worker2 pod pod-b7109ccf-092a-4036-924c-9a7e60919f8a container test-container: STEP: delete the pod May 9 22:03:08.050: INFO: Waiting for pod pod-b7109ccf-092a-4036-924c-9a7e60919f8a to disappear May 9 22:03:08.055: INFO: Pod pod-b7109ccf-092a-4036-924c-9a7e60919f8a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:03:08.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1484" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3017,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:03:08.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5527 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5527 May 9 22:03:08.199: INFO: Found 0 stateful pods, waiting for 1 May 9 22:03:18.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 9 22:03:18.224: INFO: Deleting all statefulset in ns statefulset-5527 May 9 22:03:18.273: INFO: Scaling statefulset ss to 0 May 9 22:03:38.372: INFO: Waiting for statefulset status.replicas updated to 0 May 9 22:03:38.375: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:03:38.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5527" for this suite. 
• [SLOW TEST:30.364 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":192,"skipped":3020,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:03:38.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-31501304-e622-4336-b9b1-55e31f82ce56 STEP: Creating a pod to test consume secrets May 9 22:03:38.586: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16" in namespace "projected-4865" to be "success or failure" May 9 22:03:38.589: INFO: Pod "pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668001ms May 9 22:03:40.632: INFO: Pod "pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046083976s May 9 22:03:42.644: INFO: Pod "pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058142184s STEP: Saw pod success May 9 22:03:42.644: INFO: Pod "pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16" satisfied condition "success or failure" May 9 22:03:42.647: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16 container projected-secret-volume-test: STEP: delete the pod May 9 22:03:42.872: INFO: Waiting for pod pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16 to disappear May 9 22:03:42.889: INFO: Pod pod-projected-secrets-329b8c00-b941-45d5-a2c7-a50d88e6ea16 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:03:42.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4865" for this suite. 
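For reference, the projected-secret family of specs uses the projected volume type, which can merge several sources into one mount; the mapping variant above remaps a secret key to a custom path. A minimal sketch combining a secret and a configMap source (object names are illustrative and assumed to exist):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -lR /etc/projected"]
      volumeMounts:
      - name: all-in-one
        mountPath: /etc/projected
    volumes:
    - name: all-in-one
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: data-1
              path: secret/new-path-data-1   # mapped path, as in the spec above
        - configMap:
            name: demo-config
  EOF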
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3054,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:03:42.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-145d99d0-038a-4c1b-a7d1-340bc41dbbed STEP: Creating configMap with name cm-test-opt-upd-5f04b409-6347-4077-b1b5-cd250a1e7f0c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-145d99d0-038a-4c1b-a7d1-340bc41dbbed STEP: Updating configmap cm-test-opt-upd-5f04b409-6347-4077-b1b5-cd250a1e7f0c STEP: Creating configMap with name cm-test-opt-create-c0b9fa46-1bbc-45f4-96aa-a8a20a17277a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:21.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-315" for this suite. • [SLOW TEST:98.793 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3057,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:21.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:05:21.778: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.784091ms) May 9 22:05:21.781: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.023169ms) May 9 22:05:21.784: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.978323ms) May 9 22:05:21.787: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.537294ms) May 9 22:05:21.790: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.235947ms) May 9 22:05:21.793: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.186244ms) May 9 22:05:21.797: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.552365ms) May 9 22:05:21.800: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.490465ms) May 9 22:05:21.804: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.295428ms) May 9 22:05:21.807: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.60282ms) May 9 22:05:21.811: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.467209ms) May 9 22:05:21.815: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.887893ms) May 9 22:05:21.818: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.60766ms) May 9 22:05:21.822: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.448917ms) May 9 22:05:21.826: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.805815ms) May 9 22:05:21.830: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.695562ms) May 9 22:05:21.833: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.779959ms) May 9 22:05:21.837: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.768108ms) May 9 22:05:21.841: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.055427ms) May 9 22:05:21.845: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.554622ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:21.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1836" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":195,"skipped":3076,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:21.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:05:22.780: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:05:25.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658722, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658722, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:05:28.120: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:28.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4044" for this suite. STEP: Destroying namespace "webhook-4044-markers" for this suite. 
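For reference, the webhook spec registers a MutatingWebhookConfiguration pointing at the service the log waits on (e2e-test-webhook in webhook-4044). A hedged sketch of such a registration; the webhook name and path are assumptions, and in practice clientConfig.caBundle must carry the CA that signed the webhook's serving certificate:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: demo-mutate-configmaps
  webhooks:
  - name: mutate-configmaps.example.com       # assumed name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    clientConfig:
      service:
        namespace: webhook-4044
        name: e2e-test-webhook
        path: /mutating-configmaps            # assumed path
      # caBundle: <base64-encoded CA for the webhook's serving cert>
  EOF

Once registered, every in-scope configmap create is sent to the service for patching before it is persisted, which is what "create a configmap that should be updated by the webhook" checks.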
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.473 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":196,"skipped":3093,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:28.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 9 22:05:29.109: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:39.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7454" for this suite. 
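For reference, the submit-and-remove spec is essentially a watch on pod lifecycle events around a graceful delete. It can be approximated with kubectl (pod name and image are illustrative):

  kubectl run pause-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
  kubectl get pods --watch &                        # prints a row per observed state transition
  kubectl delete pod pause-demo --grace-period=30
  # the pod shows Terminating until the kubelet confirms shutdown, then the watch observes the deletion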
• [SLOW TEST:11.165 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3100,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:39.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 9 22:05:39.576: INFO: Waiting up to 5m0s for pod "var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c" in namespace "var-expansion-8747" to be "success or failure" May 9 22:05:39.580: INFO: Pod "var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.982607ms May 9 22:05:41.585: INFO: Pod "var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008436336s May 9 22:05:43.589: INFO: Pod "var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012826719s STEP: Saw pod success May 9 22:05:43.589: INFO: Pod "var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c" satisfied condition "success or failure" May 9 22:05:43.592: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c container dapi-container: STEP: delete the pod May 9 22:05:43.706: INFO: Waiting for pod var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c to disappear May 9 22:05:43.736: INFO: Pod var-expansion-b86294d9-2c8c-4be3-b43c-537db1c0377c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:43.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8747" for this suite. 
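For reference, the variable-expansion spec depends on the kubelet expanding $(VAR) references in a container's command before the shell ever runs; the env name and value below are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: TEST_VAR
        value: test-value
      command: ["sh", "-c", "echo $(TEST_VAR)"]   # $(TEST_VAR) is substituted by the kubelet, not by sh
  EOF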
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3110,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:43.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 22:05:43.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248" in namespace "downward-api-61" to be "success or failure" May 9 22:05:43.862: INFO: Pod "downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248": Phase="Pending", Reason="", readiness=false. Elapsed: 3.648896ms May 9 22:05:45.921: INFO: Pod "downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063025937s May 9 22:05:47.926: INFO: Pod "downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06765751s STEP: Saw pod success May 9 22:05:47.926: INFO: Pod "downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248" satisfied condition "success or failure" May 9 22:05:47.928: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248 container client-container: STEP: delete the pod May 9 22:05:47.979: INFO: Waiting for pod downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248 to disappear May 9 22:05:48.078: INFO: Pod downwardapi-volume-c61b4d25-a2a9-4286-b3c2-75c0b6f34248 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:48.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-61" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3111,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:48.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c221d9d2-421d-4ba2-a4aa-b0a768657a13 STEP: Creating a pod to test consume configMaps May 9 22:05:48.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c" in namespace "projected-622" to be "success or failure" May 9 22:05:48.262: INFO: Pod "pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.015068ms May 9 22:05:50.307: INFO: Pod "pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082975368s May 9 22:05:52.311: INFO: Pod "pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08712531s STEP: Saw pod success May 9 22:05:52.311: INFO: Pod "pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c" satisfied condition "success or failure" May 9 22:05:52.314: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c container projected-configmap-volume-test: STEP: delete the pod May 9 22:05:52.370: INFO: Waiting for pod pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c to disappear May 9 22:05:52.381: INFO: Pod pod-projected-configmaps-636a43af-5bef-4e09-b071-a1644f0d205c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:05:52.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-622" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3114,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:05:52.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6881, will wait for the garbage collector to delete the pods May 9 22:05:58.585: INFO: Deleting Job.batch foo took: 6.778013ms May 9 22:05:58.885: INFO: Terminating Job.batch foo pods took: 300.231124ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:06:31.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6881" for this suite. • [SLOW TEST:39.504 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":201,"skipped":3134,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:06:31.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 22:06:32.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165" in namespace "projected-1986" to be "success or failure" May 9 22:06:32.031: INFO: Pod "downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165": Phase="Pending", Reason="", readiness=false. Elapsed: 3.010587ms May 9 22:06:34.127: INFO: Pod "downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098834045s May 9 22:06:36.131: INFO: Pod "downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.103093645s STEP: Saw pod success May 9 22:06:36.131: INFO: Pod "downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165" satisfied condition "success or failure" May 9 22:06:36.135: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165 container client-container: STEP: delete the pod May 9 22:06:36.175: INFO: Waiting for pod downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165 to disappear May 9 22:06:36.203: INFO: Pod downwardapi-volume-4233beb9-de69-4e92-a06c-aa92638aa165 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:06:36.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1986" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3152,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:06:36.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-bfd669fe-aa4b-4566-a0e4-0ef5c0a608e4 STEP: Creating a pod to test consume configMaps May 9 22:06:36.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f" in namespace "configmap-6538" to be "success or failure" May 9 22:06:36.349: INFO: Pod "pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.926654ms May 9 22:06:38.352: INFO: Pod "pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010210564s May 9 22:06:40.355: INFO: Pod "pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013386989s May 9 22:06:42.358: INFO: Pod "pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016483979s STEP: Saw pod success May 9 22:06:42.358: INFO: Pod "pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f" satisfied condition "success or failure" May 9 22:06:42.361: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f container configmap-volume-test: STEP: delete the pod May 9 22:06:42.463: INFO: Waiting for pod pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f to disappear May 9 22:06:42.486: INFO: Pod pod-configmaps-b784862d-be4b-4951-888b-9c64e719c54f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:06:42.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6538" for this suite. • [SLOW TEST:6.282 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:06:42.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 9 22:06:46.636: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:06:46.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-750" for this suite. 
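The termination-message check above hinges on a single container field: with TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet falls back to the tail of the container log only when the container exits with an error, so a pod that succeeds (as here) reports an empty message. A minimal sketch of such a container spec in Go; the package, pod name, image, and command are illustrative, not the test's own code:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FallbackPod exits 0 without writing /dev/termination-log, so its
// termination message stays empty under FallbackToLogsOnError.
var FallbackPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "main",
			Image:   "busybox",
			Command: []string{"true"}, // exit 0: no fallback to logs
			// Logs are only consulted when the container fails; on
			// success the unwritten termination-message file yields "".
			TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		}},
	},
}
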
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:06:46.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e7b215b8-4669-485d-a725-59ebf77b41ee STEP: Creating a pod to test consume configMaps May 9 22:06:46.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac" in namespace "configmap-5085" to be "success or failure" May 9 22:06:46.780: INFO: Pod "pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410624ms May 9 22:06:48.784: INFO: Pod "pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006614457s May 9 22:06:50.789: INFO: Pod "pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011327827s STEP: Saw pod success May 9 22:06:50.789: INFO: Pod "pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac" satisfied condition "success or failure" May 9 22:06:50.792: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac container configmap-volume-test: STEP: delete the pod May 9 22:06:50.838: INFO: Waiting for pod pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac to disappear May 9 22:06:50.976: INFO: Pod pod-configmaps-644c804b-cd46-4196-84bc-06f51e0d4fac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:06:50.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5085" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3233,"failed":0} ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:06:50.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 9 22:06:51.115: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:06:51.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7984" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":206,"skipped":3233,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:06:51.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-c7962867-d61e-4f6f-bc35-a79e596ff5e2 in namespace container-probe-1130 May 9 22:06:55.314: INFO: Started pod liveness-c7962867-d61e-4f6f-bc35-a79e596ff5e2 in namespace container-probe-1130 STEP: checking the pod's current state and verifying that restartCount is present May 9 22:06:55.318: INFO: Initial restart count of pod liveness-c7962867-d61e-4f6f-bc35-a79e596ff5e2 is 0 May 9 22:07:17.367: INFO: Restart count of pod container-probe-1130/liveness-c7962867-d61e-4f6f-bc35-a79e596ff5e2 is now 1 (22.049443101s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:17.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1130" for this suite. 
• [SLOW TEST:26.180 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3236,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:17.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:07:17.494: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:21.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5961" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:21.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:21.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6271" for this suite. 
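The QoS check above passes because requests and limits match exactly for both cpu and memory, which is precisely the condition for the Guaranteed class. A sketch of such a resources stanza; the quantities are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Requests == limits for every resource of every container
// => status.qosClass is set to "Guaranteed".
var guaranteedResources = corev1.ResourceRequirements{
	Limits: corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	},
	Requests: corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	},
}
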
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":209,"skipped":3279,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:21.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 9 22:07:21.905: INFO: Waiting up to 5m0s for pod "pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be" in namespace "emptydir-8358" to be "success or failure" May 9 22:07:21.913: INFO: Pod "pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483889ms May 9 22:07:23.986: INFO: Pod "pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08083731s May 9 22:07:26.000: INFO: Pod "pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be": Phase="Running", Reason="", readiness=true. Elapsed: 4.095577741s May 9 22:07:28.005: INFO: Pod "pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100413205s STEP: Saw pod success May 9 22:07:28.005: INFO: Pod "pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be" satisfied condition "success or failure" May 9 22:07:28.008: INFO: Trying to get logs from node jerma-worker pod pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be container test-container: STEP: delete the pod May 9 22:07:28.063: INFO: Waiting for pod pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be to disappear May 9 22:07:28.099: INFO: Pod pod-1c5bfdfc-97dd-4e4a-9d2e-be49027980be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:28.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8358" for this suite. 
• [SLOW TEST:6.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3287,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:28.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 22:07:28.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8725' May 9 22:07:31.130: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 22:07:31.130: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 9 22:07:35.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8725' May 9 22:07:35.342: INFO: stderr: "" May 9 22:07:35.342: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:35.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8725" for this suite. 
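As the stderr above notes, the deployment/apps.v1 generator is deprecated; roughly, it expanded the kubectl run invocation into a single-replica Deployment whose selector matches the pod-template labels. A hedged Go sketch of that expansion (the "run" label key is kubectl's convention; other details are illustrative):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// Approximately what `kubectl run --generator=deployment/apps.v1` created.
var httpdDeployment = &appsv1.Deployment{
	ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
	Spec: appsv1.DeploymentSpec{
		Replicas: int32Ptr(1),
		Selector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"run": "e2e-test-httpd-deployment"},
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"run": "e2e-test-httpd-deployment"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "e2e-test-httpd-deployment",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}},
			},
		},
	},
}
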
• [SLOW TEST:7.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":211,"skipped":3288,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:35.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 9 22:07:35.435: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 22:07:35.445: INFO: Waiting for terminating namespaces to be deleted... May 9 22:07:35.447: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 9 22:07:35.452: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:07:35.452: INFO: Container kube-proxy ready: true, restart count 0 May 9 22:07:35.452: INFO: pod-exec-websocket-70629134-caa0-4422-8281-5a6b38bdaea4 from pods-5961 started at 2020-05-09 22:07:17 +0000 UTC (1 container status recorded) May 9 22:07:35.452: INFO: Container main ready: true, restart count 0 May 9 22:07:35.452: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:07:35.452: INFO: Container kindnet-cni ready: true, restart count 0 May 9 22:07:35.452: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 9 22:07:35.457: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:07:35.457: INFO: Container kindnet-cni ready: true, restart count 0 May 9 22:07:35.457: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 9 22:07:35.457: INFO: Container kube-bench ready: false, restart count 0 May 9 22:07:35.457: INFO: e2e-test-httpd-deployment-594dddd44f-cpxdw from kubectl-8725 started at 2020-05-09 22:07:31 +0000 UTC (1 container status recorded) May 9 22:07:35.457: INFO: Container e2e-test-httpd-deployment ready: true, restart count 0 May 9 22:07:35.457: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:07:35.457: INFO: Container kube-proxy ready: true, restart count 0 May 9 22:07:35.457: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 9 22:07:35.457: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected
if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a806d189-d716-40b6-bbf1-8696ee4c5d13 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-a806d189-d716-40b6-bbf1-8696ee4c5d13 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a806d189-d716-40b6-bbf1-8696ee4c5d13 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:44.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4015" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.901 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":212,"skipped":3302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:44.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-747036a4-9fd9-44c6-b0ef-6c5c08f61cfe STEP: Creating a pod to test consume secrets May 9 22:07:44.431: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d" in namespace "projected-796" to be "success or failure" May 9 22:07:44.480: INFO: Pod "pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.487054ms May 9 22:07:46.484: INFO: Pod "pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052748097s May 9 22:07:48.490: INFO: Pod "pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05820726s STEP: Saw pod success May 9 22:07:48.490: INFO: Pod "pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d" satisfied condition "success or failure" May 9 22:07:48.493: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d container secret-volume-test: STEP: delete the pod May 9 22:07:48.539: INFO: Waiting for pod pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d to disappear May 9 22:07:48.546: INFO: Pod pod-projected-secrets-69c82788-1f60-4254-94c3-c5aeb4ccc53d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:07:48.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-796" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3335,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:07:48.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-t94g STEP: Creating a pod to test atomic-volume-subpath May 9 22:07:48.699: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-t94g" in namespace "subpath-5859" to be "success or failure" May 9 22:07:48.705: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.954889ms May 9 22:07:50.708: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008902879s May 9 22:07:52.725: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 4.026693355s May 9 22:07:54.729: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 6.030263684s May 9 22:07:56.734: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 8.034816643s May 9 22:07:58.738: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 10.039106427s May 9 22:08:00.743: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 12.043782491s May 9 22:08:02.747: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 14.048439678s May 9 22:08:04.751: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 16.052247335s May 9 22:08:06.754: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.055202239s May 9 22:08:08.758: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 20.059197347s May 9 22:08:10.761: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Running", Reason="", readiness=true. Elapsed: 22.062313549s May 9 22:08:12.798: INFO: Pod "pod-subpath-test-projected-t94g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.098721402s STEP: Saw pod success May 9 22:08:12.798: INFO: Pod "pod-subpath-test-projected-t94g" satisfied condition "success or failure" May 9 22:08:12.800: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-t94g container test-container-subpath-projected-t94g: STEP: delete the pod May 9 22:08:12.818: INFO: Waiting for pod pod-subpath-test-projected-t94g to disappear May 9 22:08:12.823: INFO: Pod pod-subpath-test-projected-t94g no longer exists STEP: Deleting pod pod-subpath-test-projected-t94g May 9 22:08:12.823: INFO: Deleting pod "pod-subpath-test-projected-t94g" in namespace "subpath-5859" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:08:12.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5859" for this suite. • [SLOW TEST:24.277 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":214,"skipped":3336,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:08:12.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 9 22:08:20.131: INFO: 10 pods remaining May 9 22:08:20.131: INFO: 0 pods has nil DeletionTimestamp May 9 22:08:20.131: INFO: May 9 22:08:21.611: INFO: 0 pods remaining May 9 22:08:21.611: INFO: 0 pods has nil DeletionTimestamp May 9 22:08:21.611: INFO: STEP: Gathering metrics W0509 22:08:22.597315 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 9 22:08:22.597: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:08:22.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7300" for this suite. • [SLOW TEST:10.309 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":215,"skipped":3341,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:08:23.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:08:25.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:08:27.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658905, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658905, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658905, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658904, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 22:08:29.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658905, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658905, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658905, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724658904, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:08:32.286: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:08:44.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3815" for this suite. STEP: Destroying namespace "webhook-3815-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.390 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":216,"skipped":3359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:08:44.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 9 22:08:44.578: INFO: Waiting up to 5m0s for pod "var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b" in namespace "var-expansion-3683" to be "success or failure" May 9 22:08:44.618: INFO: Pod "var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.646305ms May 9 22:08:46.622: INFO: Pod "var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044229983s May 9 22:08:48.626: INFO: Pod "var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048668036s STEP: Saw pod success May 9 22:08:48.626: INFO: Pod "var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b" satisfied condition "success or failure" May 9 22:08:48.630: INFO: Trying to get logs from node jerma-worker pod var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b container dapi-container: STEP: delete the pod May 9 22:08:48.684: INFO: Waiting for pod var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b to disappear May 9 22:08:48.688: INFO: Pod var-expansion-ccb0e6e9-8b15-46cb-8f07-e05928fe2b7b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:08:48.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3683" for this suite. 
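Variable substitution in a container's args, as exercised above, uses the $(VAR_NAME) syntax resolved by Kubernetes against the container's own env, not by a shell. A sketch; the env name and values are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// "$(TEST_VAR)" in Command/Args is expanded by Kubernetes itself;
// an unresolvable reference is left verbatim rather than erroring.
var expansionContainer = corev1.Container{
	Name:    "dapi-container",
	Image:   "busybox",
	Command: []string{"sh", "-c", "echo test-value: $(TEST_VAR)"},
	Env: []corev1.EnvVar{{
		Name:  "TEST_VAR",
		Value: "test-value",
	}},
}
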
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:08:48.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 9 22:08:48.843: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:48.849: INFO: Number of nodes with available pods: 0 May 9 22:08:48.849: INFO: Node jerma-worker is running more than one daemon pod May 9 22:08:49.854: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:49.858: INFO: Number of nodes with available pods: 0 May 9 22:08:49.858: INFO: Node jerma-worker is running more than one daemon pod May 9 22:08:51.009: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:51.012: INFO: Number of nodes with available pods: 0 May 9 22:08:51.012: INFO: Node jerma-worker is running more than one daemon pod May 9 22:08:51.901: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:52.056: INFO: Number of nodes with available pods: 0 May 9 22:08:52.056: INFO: Node jerma-worker is running more than one daemon pod May 9 22:08:52.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:52.891: INFO: Number of nodes with available pods: 2 May 9 22:08:52.891: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 9 22:08:52.904: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:52.930: INFO: Number of nodes with available pods: 1 May 9 22:08:52.930: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:08:53.953: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:53.956: INFO: Number of nodes with available pods: 1 May 9 22:08:53.956: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:08:54.934: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:55.038: INFO: Number of nodes with available pods: 1 May 9 22:08:55.038: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:08:55.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:55.939: INFO: Number of nodes with available pods: 1 May 9 22:08:55.939: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:08:56.967: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:08:56.983: INFO: Number of nodes with available pods: 2 May 9 22:08:56.983: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-320, will wait for the garbage collector to delete the pods May 9 22:08:57.047: INFO: Deleting DaemonSet.extensions daemon-set took: 6.322163ms May 9 22:08:57.447: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.386333ms May 9 22:09:09.551: INFO: Number of nodes with available pods: 0 May 9 22:09:09.551: INFO: Number of running nodes: 0, number of available pods: 0 May 9 22:09:09.554: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-320/daemonsets","resourceVersion":"14813967"},"items":null} May 9 22:09:09.557: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-320/pods","resourceVersion":"14813967"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:09:09.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-320" for this suite. 
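The revival step above relies on the DaemonSet controller reconciling toward one pod per eligible node: when a daemon pod is forced to Failed, the controller deletes it and creates a replacement, which is why the available-pod count dips to 1 and recovers to 2. A sketch of the simple DaemonSet shape the test creates; the label key and image are illustrative:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// One pod per schedulable node; the tainted control-plane node is
// skipped unless a matching toleration is added to the pod template.
var simpleDaemonSet = &appsv1.DaemonSet{
	ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
	Spec: appsv1.DaemonSetSpec{
		Selector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"daemonset-name": "daemon-set"},
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"daemonset-name": "daemon-set"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}},
			},
		},
	},
}
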
• [SLOW TEST:20.842 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":218,"skipped":3448,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:09:09.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-1541421c-ba1c-49cf-b618-3a07af1900a7 in namespace container-probe-549 May 9 22:09:13.726: INFO: Started pod test-webserver-1541421c-ba1c-49cf-b618-3a07af1900a7 in namespace container-probe-549 STEP: checking the pod's current state and verifying that restartCount is present May 9 22:09:13.730: INFO: Initial restart count of pod test-webserver-1541421c-ba1c-49cf-b618-3a07af1900a7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:13:14.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-549" for this suite. 
• [SLOW TEST:244.776 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:13:14.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e5a8e283-922f-4e3b-a53b-548577bc1004 STEP: Creating a pod to test consume secrets May 9 22:13:14.440: INFO: Waiting up to 5m0s for pod "pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67" in namespace "secrets-6979" to be "success or failure" May 9 22:13:14.678: INFO: Pod "pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67": Phase="Pending", Reason="", readiness=false. Elapsed: 238.557963ms May 9 22:13:16.682: INFO: Pod "pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24266028s May 9 22:13:18.686: INFO: Pod "pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246770265s STEP: Saw pod success May 9 22:13:18.686: INFO: Pod "pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67" satisfied condition "success or failure" May 9 22:13:18.690: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67 container secret-env-test: STEP: delete the pod May 9 22:13:18.818: INFO: Waiting for pod pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67 to disappear May 9 22:13:18.821: INFO: Pod pod-secrets-db6e5d51-3c06-4738-b4b1-8c8c3beecb67 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:13:18.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6979" for this suite. 
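Consuming a Secret through env vars, as above, goes through valueFrom.secretKeyRef rather than a volume. A sketch; the secret and key names are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// The kubelet resolves the secret key at container start and injects
// its decoded value as a plain environment variable.
var secretEnvContainer = corev1.Container{
	Name:    "secret-env-test",
	Image:   "busybox",
	Command: []string{"sh", "-c", "env"},
	Env: []corev1.EnvVar{{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
				Key:                  "data-1",
			},
		},
	}},
}
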
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3492,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:13:18.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 9 22:13:18.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3721 -- logs-generator --log-lines-total 100 --run-duration 20s' May 9 22:13:18.980: INFO: stderr: "" May 9 22:13:18.980: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 9 22:13:18.980: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 9 22:13:18.980: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3721" to be "running and ready, or succeeded" May 9 22:13:19.002: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 21.786735ms May 9 22:13:21.006: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026195224s May 9 22:13:23.011: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.03069579s May 9 22:13:23.011: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 9 22:13:23.011: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 9 22:13:23.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3721' May 9 22:13:23.121: INFO: stderr: "" May 9 22:13:23.121: INFO: stdout: "I0509 22:13:21.273587 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/7l8 317\nI0509 22:13:21.473721 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/c2j9 518\nI0509 22:13:21.673788 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/c2b 525\nI0509 22:13:21.874038 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/jwjh 305\nI0509 22:13:22.073853 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/pr5 201\nI0509 22:13:22.273754 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/tdn 520\nI0509 22:13:22.473781 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/fzhw 521\nI0509 22:13:22.673971 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/qkv7 489\nI0509 22:13:22.873812 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/pdpb 540\nI0509 22:13:23.073772 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/g8bq 360\n" STEP: limiting log lines May 9 22:13:23.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3721 --tail=1' May 9 22:13:23.242: INFO: stderr: "" May 9 22:13:23.242: INFO: stdout: "I0509 22:13:23.073772 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/g8bq 360\n" May 9 22:13:23.242: INFO: got output "I0509 22:13:23.073772 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/g8bq 360\n" STEP: limiting log bytes May 9 22:13:23.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3721 --limit-bytes=1' May 9 22:13:23.351: INFO: stderr: "" May 9 22:13:23.351: INFO: stdout: "I" May 9 22:13:23.351: INFO: got output "I" STEP: exposing timestamps May 9 22:13:23.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3721 --tail=1 --timestamps' May 9 22:13:23.461: INFO: stderr: "" May 9 22:13:23.461: INFO: stdout: "2020-05-09T22:13:23.273933247Z I0509 22:13:23.273749 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/cff 514\n" May 9 22:13:23.461: INFO: got output "2020-05-09T22:13:23.273933247Z I0509 22:13:23.273749 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/cff 514\n" STEP: restricting to a time range May 9 22:13:25.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3721 --since=1s' May 9 22:13:26.071: INFO: stderr: "" May 9 22:13:26.071: INFO: stdout: "I0509 22:13:25.073856 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/hrl 495\nI0509 22:13:25.273766 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/p2t 447\nI0509 22:13:25.473793 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/hmb 327\nI0509 22:13:25.673787 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/l8p 318\nI0509 22:13:25.873795 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/5bz 221\n" May 9 22:13:26.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3721 --since=24h' May 9 22:13:26.178: INFO: stderr: "" May 9 22:13:26.178: INFO:
stdout: "I0509 22:13:21.273587 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/7l8 317\nI0509 22:13:21.473721 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/c2j9 518\nI0509 22:13:21.673788 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/c2b 525\nI0509 22:13:21.874038 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/jwjh 305\nI0509 22:13:22.073853 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/pr5 201\nI0509 22:13:22.273754 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/tdn 520\nI0509 22:13:22.473781 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/fzhw 521\nI0509 22:13:22.673971 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/qkv7 489\nI0509 22:13:22.873812 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/pdpb 540\nI0509 22:13:23.073772 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/g8bq 360\nI0509 22:13:23.273749 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/cff 514\nI0509 22:13:23.473741 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/5gd 430\nI0509 22:13:23.673882 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/7j67 441\nI0509 22:13:23.873774 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/4l2 248\nI0509 22:13:24.073779 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/l78n 322\nI0509 22:13:24.273748 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/4r9 360\nI0509 22:13:24.473846 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/fzp9 338\nI0509 22:13:24.673770 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/nwd 264\nI0509 22:13:24.873765 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/zm8 490\nI0509 22:13:25.073856 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/hrl 495\nI0509 22:13:25.273766 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/p2t 447\nI0509 22:13:25.473793 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/hmb 327\nI0509 22:13:25.673787 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/l8p 318\nI0509 22:13:25.873795 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/5bz 221\nI0509 22:13:26.073743 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/qt65 250\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 9 22:13:26.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3721' May 9 22:13:39.279: INFO: stderr: "" May 9 22:13:39.279: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:13:39.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3721" for this suite. 
• [SLOW TEST:20.489 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":221,"skipped":3493,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:13:39.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 9 22:13:39.434: INFO: Waiting up to 5m0s for pod "downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a" in namespace "downward-api-5354" to be "success or failure" May 9 22:13:39.441: INFO: Pod "downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697773ms May 9 22:13:41.639: INFO: Pod "downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204788039s May 9 22:13:43.652: INFO: Pod "downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.218542508s STEP: Saw pod success May 9 22:13:43.653: INFO: Pod "downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a" satisfied condition "success or failure" May 9 22:13:43.656: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a container dapi-container: STEP: delete the pod May 9 22:13:43.743: INFO: Waiting for pod downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a to disappear May 9 22:13:43.778: INFO: Pod downward-api-b9d7fb97-9268-42a1-ae14-04e8c920266a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:13:43.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5354" for this suite. 
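------------------------------
The env vars checked by the Downward API test above come from the fieldRef mechanism, which resolves pod metadata when the container starts. A minimal sketch of the manifest shape involved, using hypothetical names (dapi-demo, POD_UID):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # print the injected value and exit
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's UID, assigned by the API server
EOF
------------------------------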
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3495,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:13:43.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:13:44.334: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:13:46.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659224, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659224, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659224, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659224, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:13:49.407: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:13:49.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9173-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:13:50.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8922" for this suite. STEP: Destroying namespace "webhook-8922-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.838 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":223,"skipped":3516,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:13:50.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 22:13:56.827: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:13:56.830: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:13:56.833: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:13:56.840: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:13:56.846: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:13:56.848: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:13:56.854: INFO: Lookups using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 failed for: 
[wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local] May 9 22:14:01.863: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:01.866: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:01.880: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:01.883: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:01.965: INFO: Lookups using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 failed for: [wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local] May 9 22:14:06.866: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:06.869: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:06.884: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:06.887: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:06.894: INFO: Lookups using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 failed for: [wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local] May 9 22:14:11.863: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:11.866: INFO: Unable to read 
wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:11.879: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:11.882: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:11.888: INFO: Lookups using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 failed for: [wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local] May 9 22:14:16.865: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:16.868: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:16.880: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:16.883: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:16.889: INFO: Lookups using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 failed for: [wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local] May 9 22:14:21.887: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:21.890: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:21.903: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:21.905: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local from pod dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18: the server could not find the requested resource (get pods 
dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18) May 9 22:14:21.910: INFO: Lookups using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 failed for: [wheezy_udp@dns-test-service-2.dns-6155.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6155.svc.cluster.local jessie_udp@dns-test-service-2.dns-6155.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6155.svc.cluster.local] May 9 22:14:28.083: INFO: DNS probes using dns-6155/dns-test-3b7de17b-d337-4869-b49d-2a4619ff3d18 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:14:29.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6155" for this suite. • [SLOW TEST:39.348 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":224,"skipped":3527,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:14:29.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 9 22:14:30.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1673' May 9 22:14:30.315: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 22:14:30.315: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 9 22:14:31.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1673' May 9 22:14:32.531: INFO: stderr: "" May 9 22:14:32.531: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:14:32.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1673" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":225,"skipped":3536,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:14:32.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:14:32.870: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 9 22:14:34.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6545 create -f -' May 9 22:14:39.127: INFO: stderr: "" May 9 22:14:39.127: INFO: stdout: "e2e-test-crd-publish-openapi-7459-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 9 22:14:39.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6545 delete e2e-test-crd-publish-openapi-7459-crds test-cr' May 9 22:14:39.271: INFO: stderr: "" May 9 22:14:39.271: INFO: stdout: "e2e-test-crd-publish-openapi-7459-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 9 22:14:39.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6545 apply -f -' May 9 22:14:39.528: INFO: stderr: "" May 9 22:14:39.528: INFO: stdout: "e2e-test-crd-publish-openapi-7459-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 9 22:14:39.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6545 delete e2e-test-crd-publish-openapi-7459-crds test-cr' May 9 22:14:39.623: INFO: stderr: "" May 9 22:14:39.623: INFO: stdout: "e2e-test-crd-publish-openapi-7459-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 9 22:14:39.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
explain e2e-test-crd-publish-openapi-7459-crds' May 9 22:14:39.871: INFO: stderr: "" May 9 22:14:39.871: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7459-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:14:42.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6545" for this suite. • [SLOW TEST:10.098 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":226,"skipped":3543,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:14:42.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-a07a3636-6c5b-493c-9c5c-dc24d7cfb851 STEP: Creating a pod to test consume secrets May 9 22:14:42.879: INFO: Waiting up to 5m0s for pod "pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494" in namespace "secrets-8547" to be "success or failure" May 9 22:14:42.884: INFO: Pod "pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494": Phase="Pending", Reason="", readiness=false. Elapsed: 5.489784ms May 9 22:14:44.998: INFO: Pod "pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119220291s May 9 22:14:47.002: INFO: Pod "pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494": Phase="Running", Reason="", readiness=true. Elapsed: 4.123316637s May 9 22:14:49.006: INFO: Pod "pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.126579823s STEP: Saw pod success May 9 22:14:49.006: INFO: Pod "pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494" satisfied condition "success or failure" May 9 22:14:49.008: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494 container secret-volume-test: STEP: delete the pod May 9 22:14:49.030: INFO: Waiting for pod pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494 to disappear May 9 22:14:49.035: INFO: Pod pod-secrets-b4c566fe-8e53-4e03-8a56-9635e8bc6494 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:14:49.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8547" for this suite. • [SLOW TEST:6.278 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3561,"failed":0} [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:14:49.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:07.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8690" for this suite. 
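------------------------------
"Locally restarted" in the Job test above means restartPolicy: OnFailure, where the kubelet restarts the failed container inside the same pod rather than the Job controller replacing the pod. A minimal sketch of that behavior under hypothetical names: the marker file lives in an emptyDir, which outlives individual container restarts, so the first attempt fails and the retry succeeds.

$ cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-job
spec:
  template:
    spec:
      restartPolicy: OnFailure        # kubelet restarts the container in place
      containers:
      - name: worker
        image: busybox
        # fail on the first attempt, leave a marker, succeed after the local restart
        command: ["sh", "-c", "test -f /work/retried && exit 0; touch /work/retried; exit 1"]
        volumeMounts:
        - name: work
          mountPath: /work
      volumes:
      - name: work
        emptyDir: {}
EOF
------------------------------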
• [SLOW TEST:18.079 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":228,"skipped":3561,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:07.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 9 22:15:08.153: INFO: Waiting up to 5m0s for pod "pod-642fd9f5-39e1-4313-9659-021ccb3881aa" in namespace "emptydir-8270" to be "success or failure" May 9 22:15:08.791: INFO: Pod "pod-642fd9f5-39e1-4313-9659-021ccb3881aa": Phase="Pending", Reason="", readiness=false. Elapsed: 637.356154ms May 9 22:15:10.839: INFO: Pod "pod-642fd9f5-39e1-4313-9659-021ccb3881aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.686109465s May 9 22:15:13.032: INFO: Pod "pod-642fd9f5-39e1-4313-9659-021ccb3881aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.878325371s STEP: Saw pod success May 9 22:15:13.032: INFO: Pod "pod-642fd9f5-39e1-4313-9659-021ccb3881aa" satisfied condition "success or failure" May 9 22:15:13.060: INFO: Trying to get logs from node jerma-worker pod pod-642fd9f5-39e1-4313-9659-021ccb3881aa container test-container: STEP: delete the pod May 9 22:15:13.200: INFO: Waiting for pod pod-642fd9f5-39e1-4313-9659-021ccb3881aa to disappear May 9 22:15:13.210: INFO: Pod pod-642fd9f5-39e1-4313-9659-021ccb3881aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:13.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8270" for this suite. • [SLOW TEST:6.099 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3571,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:13.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:29.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9874" for this suite. • [SLOW TEST:16.415 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":230,"skipped":3573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:29.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:15:34.293: INFO: Waiting up to 5m0s for pod "client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9" in namespace "pods-4496" to be "success or failure" May 9 22:15:34.302: INFO: Pod "client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.069772ms May 9 22:15:36.583: INFO: Pod "client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289166101s May 9 22:15:38.587: INFO: Pod "client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293363434s STEP: Saw pod success May 9 22:15:38.587: INFO: Pod "client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9" satisfied condition "success or failure" May 9 22:15:38.612: INFO: Trying to get logs from node jerma-worker pod client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9 container env3cont: STEP: delete the pod May 9 22:15:38.684: INFO: Waiting for pod client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9 to disappear May 9 22:15:38.696: INFO: Pod client-envvars-ef6479fa-4f2b-4b2b-912b-8636705f1ed9 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:38.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4496" for this suite. • [SLOW TEST:9.270 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:38.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-31168906-e82d-4050-a2ae-411bba5ad9d5 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:39.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1350" for this suite. 
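------------------------------
The empty-key rejection above happens in API-server validation, so nothing is ever persisted. A minimal sketch of the failure with a hypothetical name; note that YAML needs the empty key quoted:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: bad-configmap
data:
  "": should-never-be-accepted
EOF
# expected: the create is rejected with a validation error along the lines of
# 'ConfigMap "bad-configmap" is invalid: data[]: Invalid value: ""'
------------------------------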
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":232,"skipped":3669,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:39.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f05ad09a-f4dc-4011-ba2b-550ca4b748d4 STEP: Creating a pod to test consume configMaps May 9 22:15:39.099: INFO: Waiting up to 5m0s for pod "pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781" in namespace "configmap-413" to be "success or failure" May 9 22:15:39.105: INFO: Pod "pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588291ms May 9 22:15:41.223: INFO: Pod "pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124697311s May 9 22:15:43.227: INFO: Pod "pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127914427s STEP: Saw pod success May 9 22:15:43.227: INFO: Pod "pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781" satisfied condition "success or failure" May 9 22:15:43.228: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781 container configmap-volume-test: STEP: delete the pod May 9 22:15:43.271: INFO: Waiting for pod pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781 to disappear May 9 22:15:43.275: INFO: Pod pod-configmaps-caa4e46e-859c-4ab5-9df6-320a7d63c781 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:43.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-413" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3687,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:43.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 9 22:15:43.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332" in namespace "downward-api-4095" to be "success or failure" May 9 22:15:43.365: INFO: Pod "downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332": Phase="Pending", Reason="", readiness=false. Elapsed: 3.080386ms May 9 22:15:45.403: INFO: Pod "downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040795333s May 9 22:15:47.407: INFO: Pod "downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332": Phase="Running", Reason="", readiness=true. Elapsed: 4.044968067s May 9 22:15:49.411: INFO: Pod "downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049436441s STEP: Saw pod success May 9 22:15:49.411: INFO: Pod "downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332" satisfied condition "success or failure" May 9 22:15:49.415: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332 container client-container: STEP: delete the pod May 9 22:15:49.443: INFO: Waiting for pod downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332 to disappear May 9 22:15:49.449: INFO: Pod downwardapi-volume-802b27f9-e09a-4a5b-96cd-80ac086ba332 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:49.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4095" for this suite. 
• [SLOW TEST:6.248 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3705,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:49.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:49.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5250" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":235,"skipped":3723,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:49.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 9 22:15:54.260: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a15c94cb-a5fe-4208-a36f-40488e8369d0" May 9 22:15:54.260: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a15c94cb-a5fe-4208-a36f-40488e8369d0" in namespace "pods-8976" to be "terminated due to deadline exceeded" May 9 22:15:54.277: INFO: Pod "pod-update-activedeadlineseconds-a15c94cb-a5fe-4208-a36f-40488e8369d0": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.83688ms May 9 22:15:56.282: INFO: Pod "pod-update-activedeadlineseconds-a15c94cb-a5fe-4208-a36f-40488e8369d0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021773823s May 9 22:15:56.282: INFO: Pod "pod-update-activedeadlineseconds-a15c94cb-a5fe-4208-a36f-40488e8369d0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:15:56.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8976" for this suite. • [SLOW TEST:6.692 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3732,"failed":0} SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:15:56.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-9819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9819 to expose endpoints map[] May 9 22:15:56.451: INFO: Get endpoints failed (10.718012ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 9 22:15:57.454: INFO: successfully validated that service endpoint-test2 in namespace services-9819 exposes endpoints map[] (1.013563188s elapsed) STEP: Creating pod pod1 in namespace services-9819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9819 to expose endpoints map[pod1:[80]] May 9 22:16:01.553: INFO: successfully validated that service endpoint-test2 in namespace services-9819 exposes endpoints map[pod1:[80]] (4.093546421s elapsed) STEP: Creating pod pod2 in namespace services-9819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9819 to expose endpoints map[pod1:[80] pod2:[80]] May 9 22:16:04.794: INFO: successfully validated that service endpoint-test2 in namespace services-9819 exposes endpoints map[pod1:[80] pod2:[80]] (3.235947806s elapsed) STEP: Deleting pod pod1 in namespace services-9819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9819 to expose endpoints map[pod2:[80]] May 9 22:16:05.841: INFO: successfully validated that service endpoint-test2 in namespace services-9819 exposes endpoints map[pod2:[80]] (1.042212612s elapsed) STEP: Deleting pod pod2 in namespace services-9819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace 
services-9819 to expose endpoints map[] May 9 22:16:06.855: INFO: successfully validated that service endpoint-test2 in namespace services-9819 exposes endpoints map[] (1.009156229s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:16:07.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9819" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.841 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":237,"skipped":3734,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:16:07.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 9 22:16:07.235: INFO: Waiting up to 5m0s for pod "var-expansion-fcb168c6-92f9-4343-be83-852d52843b40" in namespace "var-expansion-4020" to be "success or failure" May 9 22:16:07.237: INFO: Pod "var-expansion-fcb168c6-92f9-4343-be83-852d52843b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.805679ms May 9 22:16:09.259: INFO: Pod "var-expansion-fcb168c6-92f9-4343-be83-852d52843b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024931947s May 9 22:16:11.263: INFO: Pod "var-expansion-fcb168c6-92f9-4343-be83-852d52843b40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028697872s STEP: Saw pod success May 9 22:16:11.263: INFO: Pod "var-expansion-fcb168c6-92f9-4343-be83-852d52843b40" satisfied condition "success or failure" May 9 22:16:11.266: INFO: Trying to get logs from node jerma-worker pod var-expansion-fcb168c6-92f9-4343-be83-852d52843b40 container dapi-container: STEP: delete the pod May 9 22:16:11.291: INFO: Waiting for pod var-expansion-fcb168c6-92f9-4343-be83-852d52843b40 to disappear May 9 22:16:11.295: INFO: Pod var-expansion-fcb168c6-92f9-4343-be83-852d52843b40 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:16:11.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4020" for this suite. 
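------------------------------
Env composition in the Variable Expansion test above uses the $(VAR) substitution syntax, which the kubelet resolves against variables defined earlier in the same env list. A minimal sketch with hypothetical names:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FIRST
      value: "hello"
    - name: COMPOSED
      value: "$(FIRST)-world"     # expands to "hello-world"
EOF

Order matters: a $(VAR) reference to a variable defined later in the list, or not defined at all, is left as the literal string rather than expanded.
------------------------------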
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3745,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:16:11.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:16:11.418: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d717acd7-ea50-43bb-b0bb-9eb5da81a910" in namespace "security-context-test-6075" to be "success or failure" May 9 22:16:11.463: INFO: Pod "busybox-user-65534-d717acd7-ea50-43bb-b0bb-9eb5da81a910": Phase="Pending", Reason="", readiness=false. Elapsed: 45.373609ms May 9 22:16:13.467: INFO: Pod "busybox-user-65534-d717acd7-ea50-43bb-b0bb-9eb5da81a910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049213438s May 9 22:16:15.471: INFO: Pod "busybox-user-65534-d717acd7-ea50-43bb-b0bb-9eb5da81a910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05351711s May 9 22:16:15.471: INFO: Pod "busybox-user-65534-d717acd7-ea50-43bb-b0bb-9eb5da81a910" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:16:15.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6075" for this suite. 
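------------------------------
Note: the security-context run above asserts that the container process runs as uid 65534. A minimal sketch of that kind of pod (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                  # illustrative image
    command: ["sh", "-c", "id -u"]  # should print 65534
    securityContext:
      runAsUser: 65534              # the uid the test asserts on
------------------------------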
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:16:15.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 9 22:16:15.608: INFO: Waiting up to 5m0s for pod "pod-1b5ae24b-c30d-404e-8fc5-817d729337bb" in namespace "emptydir-6637" to be "success or failure" May 9 22:16:15.654: INFO: Pod "pod-1b5ae24b-c30d-404e-8fc5-817d729337bb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.910969ms May 9 22:16:17.658: INFO: Pod "pod-1b5ae24b-c30d-404e-8fc5-817d729337bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049784134s May 9 22:16:19.663: INFO: Pod "pod-1b5ae24b-c30d-404e-8fc5-817d729337bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054152063s STEP: Saw pod success May 9 22:16:19.663: INFO: Pod "pod-1b5ae24b-c30d-404e-8fc5-817d729337bb" satisfied condition "success or failure" May 9 22:16:19.665: INFO: Trying to get logs from node jerma-worker2 pod pod-1b5ae24b-c30d-404e-8fc5-817d729337bb container test-container: STEP: delete the pod May 9 22:16:19.704: INFO: Waiting for pod pod-1b5ae24b-c30d-404e-8fc5-817d729337bb to disappear May 9 22:16:19.720: INFO: Pod pod-1b5ae24b-c30d-404e-8fc5-817d729337bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:16:19.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6637" for this suite. 
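------------------------------
Note: the emptydir run above checks a tmpfs-backed emptyDir with 0777 permissions; the mode and ownership checks are performed by the suite's test image. A minimal sketch of the volume layout only (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative; the suite uses its own test image
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # Memory medium = tmpfs-backed emptyDir
------------------------------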
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3797,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:16:19.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:16:54.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2702" for this suite. 
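------------------------------
Note: the three containers above (terminate-cmd-rpa, -rpof, -rpn) correspond to the restart policies Always, OnFailure, and Never. A minimal sketch of one variant, assuming an illustrative image and exit code:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo          # hypothetical name
spec:
  restartPolicy: OnFailure          # the run above also covers Always and Never
  containers:
  - name: terminate-cmd
    image: busybox                  # illustrative image
    command: ["sh", "-c", "exit 1"] # non-zero exit drives the RestartCount/Phase/State checks
------------------------------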
• [SLOW TEST:34.827 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:16:54.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 9 22:16:54.623: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:02.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3331" for this suite. 
• [SLOW TEST:7.597 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":242,"skipped":3830,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:02.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 9 22:17:02.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3177' May 9 22:17:02.531: INFO: stderr: "" May 9 22:17:02.531: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 9 22:17:03.535: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:17:03.535: INFO: Found 0 / 1 May 9 22:17:04.536: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:17:04.536: INFO: Found 0 / 1 May 9 22:17:05.536: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:17:05.536: INFO: Found 1 / 1 May 9 22:17:05.536: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 9 22:17:05.539: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:17:05.539: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 9 22:17:05.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-bjnk7 --namespace=kubectl-3177 -p {"metadata":{"annotations":{"x":"y"}}}' May 9 22:17:05.645: INFO: stderr: "" May 9 22:17:05.646: INFO: stdout: "pod/agnhost-master-bjnk7 patched\n" STEP: checking annotations May 9 22:17:05.704: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:17:05.704: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:05.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3177" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":243,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:05.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 9 22:17:09.903: INFO: &Pod{ObjectMeta:{send-events-5990d03b-d398-4721-a6e1-44cca632ec4d events-1035 /api/v1/namespaces/events-1035/pods/send-events-5990d03b-d398-4721-a6e1-44cca632ec4d 1a70848f-1579-40de-a575-2a0f55a2b97c 14816257 0 2020-05-09 22:17:05 +0000 UTC map[name:foo time:781078117] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svhsk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svhsk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svhsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:N
oExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 22:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 22:17:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 22:17:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-09 22:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.138,StartTime:2020-05-09 22:17:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-09 22:17:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a9c6c6187538a1f206598def4a470fc22ee42eaef64576817497aae3e678dc84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 9 22:17:11.908: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 9 22:17:13.911: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:13.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1035" for this suite. 
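------------------------------
Note: the test above retrieves the pod and then asserts that two Events reference it, one emitted by the scheduler and one by the kubelet. A rough sketch of the shape of the scheduler event (metadata.name and reason here are illustrative, not values from this run):

apiVersion: v1
kind: Event
metadata:
  name: send-events-demo.scheduled    # hypothetical name
  namespace: events-1035
involvedObject:
  kind: Pod
  name: send-events-5990d03b-d398-4721-a6e1-44cca632ec4d
  namespace: events-1035
source:
  component: default-scheduler        # the kubelet event carries component: kubelet
reason: Scheduled
------------------------------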
• [SLOW TEST:8.321 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":244,"skipped":3875,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:14.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:17:14.705: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:17:16.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659434, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659434, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659434, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659434, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:17:19.818: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace 
that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:29.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3490" for this suite. STEP: Destroying namespace "webhook-3490-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.117 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":245,"skipped":3894,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:30.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:34.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7980" for this suite.
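------------------------------
Note: the kubelet run above schedules a busybox command and verifies that its stdout lands in the container log. A minimal sketch (name, image, and message are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                  # illustrative image
    command: ["sh", "-c", "echo 'Hello from busybox'"]
# the assertion then boils down to reading the log, e.g.:
#   kubectl logs busybox-scheduling-demo
------------------------------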
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3902,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:34.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:17:34.376: INFO: Creating ReplicaSet my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3 May 9 22:17:34.422: INFO: Pod name my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3: Found 0 pods out of 1 May 9 22:17:39.425: INFO: Pod name my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3: Found 1 pods out of 1 May 9 22:17:39.425: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3" is running May 9 22:17:39.430: INFO: Pod "my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3-x4v5q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 22:17:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 22:17:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 22:17:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 22:17:34 +0000 UTC Reason: Message:}]) May 9 22:17:39.430: INFO: Trying to dial the pod May 9 22:17:44.488: INFO: Controller my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3: Got expected result from replica 1 [my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3-x4v5q]: "my-hostname-basic-8ffc24a5-da88-40a3-bb18-5532f860feb3-x4v5q", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:44.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6109" for this suite. 
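------------------------------
Note: the replicaset run above creates one replica that serves its own hostname and then dials it. A rough sketch of that ReplicaSet, assuming an illustrative fixed name in place of the generated one:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo      # hypothetical; the run uses a generated name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]    # replies with the pod's hostname, which the test checks
------------------------------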
• [SLOW TEST:10.359 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":247,"skipped":3907,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:44.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 9 22:17:49.036: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:49.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6549" for this suite. 
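------------------------------
Note: the termination-message run above writes "DONE" to a non-default terminationMessagePath as a non-root user and asserts the kubelet surfaces it. A minimal sketch (name, image, uid, and path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative image
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000               # non-root user, as in the test variant above
------------------------------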
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":3912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:49.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 9 22:17:49.437: INFO: Waiting up to 5m0s for pod "pod-774402f7-dd0d-4c80-962b-54d051c0cc97" in namespace "emptydir-7900" to be "success or failure" May 9 22:17:49.476: INFO: Pod "pod-774402f7-dd0d-4c80-962b-54d051c0cc97": Phase="Pending", Reason="", readiness=false. Elapsed: 39.163164ms May 9 22:17:51.479: INFO: Pod "pod-774402f7-dd0d-4c80-962b-54d051c0cc97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04249489s May 9 22:17:53.484: INFO: Pod "pod-774402f7-dd0d-4c80-962b-54d051c0cc97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046989958s STEP: Saw pod success May 9 22:17:53.484: INFO: Pod "pod-774402f7-dd0d-4c80-962b-54d051c0cc97" satisfied condition "success or failure" May 9 22:17:53.487: INFO: Trying to get logs from node jerma-worker pod pod-774402f7-dd0d-4c80-962b-54d051c0cc97 container test-container: STEP: delete the pod May 9 22:17:53.521: INFO: Waiting for pod pod-774402f7-dd0d-4c80-962b-54d051c0cc97 to disappear May 9 22:17:53.538: INFO: Pod pod-774402f7-dd0d-4c80-962b-54d051c0cc97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:53.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7900" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":3963,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:53.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-751f5107-8f4f-4759-9e67-4799469d2407 STEP: Creating a pod to test consume configMaps May 9 22:17:53.693: INFO: Waiting up to 5m0s for pod "pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c" in namespace "configmap-8641" to be "success or failure" May 9 22:17:53.699: INFO: Pod "pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.685603ms May 9 22:17:55.719: INFO: Pod "pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025474456s May 9 22:17:57.724: INFO: Pod "pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030073546s STEP: Saw pod success May 9 22:17:57.724: INFO: Pod "pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c" satisfied condition "success or failure" May 9 22:17:57.726: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c container configmap-volume-test: STEP: delete the pod May 9 22:17:57.789: INFO: Waiting for pod pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c to disappear May 9 22:17:57.835: INFO: Pod pod-configmaps-b800e4f8-e9e9-494f-8541-9bdfdddf7f6c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:17:57.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8641" for this suite. 
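------------------------------
Note: the configmap run above mounts a ConfigMap as a volume and reads a key back as a file. A minimal sketch of the two objects involved (names, key, and image are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-demo       # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                  # illustrative image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo   # each key becomes a file under the mountPath
------------------------------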
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3963,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:17:57.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:17:57.927: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 9 22:18:00.033: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:01.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1854" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":251,"skipped":3980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:01.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 9 22:18:07.364: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2903 PodName:pod-sharedvolume-4224c028-c878-4ab5-a790-9d4b23fd5fc4 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 22:18:07.365: INFO: >>> kubeConfig: /root/.kube/config I0509 22:18:07.450715 7 log.go:172] (0xc0008fe420) (0xc000ce3680) Create stream I0509 22:18:07.450741 7 log.go:172] (0xc0008fe420) (0xc000ce3680) Stream added, broadcasting: 1 I0509 22:18:07.452200 7 log.go:172] (0xc0008fe420) Reply frame received for 1 I0509 22:18:07.452240 7 log.go:172] (0xc0008fe420)
(0xc000ce3a40) Create stream I0509 22:18:07.452257 7 log.go:172] (0xc0008fe420) (0xc000ce3a40) Stream added, broadcasting: 3 I0509 22:18:07.452915 7 log.go:172] (0xc0008fe420) Reply frame received for 3 I0509 22:18:07.452933 7 log.go:172] (0xc0008fe420) (0xc000ce3f40) Create stream I0509 22:18:07.452944 7 log.go:172] (0xc0008fe420) (0xc000ce3f40) Stream added, broadcasting: 5 I0509 22:18:07.453813 7 log.go:172] (0xc0008fe420) Reply frame received for 5 I0509 22:18:07.516315 7 log.go:172] (0xc0008fe420) Data frame received for 5 I0509 22:18:07.516346 7 log.go:172] (0xc000ce3f40) (5) Data frame handling I0509 22:18:07.516366 7 log.go:172] (0xc0008fe420) Data frame received for 3 I0509 22:18:07.516378 7 log.go:172] (0xc000ce3a40) (3) Data frame handling I0509 22:18:07.516390 7 log.go:172] (0xc000ce3a40) (3) Data frame sent I0509 22:18:07.516401 7 log.go:172] (0xc0008fe420) Data frame received for 3 I0509 22:18:07.516411 7 log.go:172] (0xc000ce3a40) (3) Data frame handling I0509 22:18:07.517654 7 log.go:172] (0xc0008fe420) Data frame received for 1 I0509 22:18:07.517670 7 log.go:172] (0xc000ce3680) (1) Data frame handling I0509 22:18:07.517683 7 log.go:172] (0xc000ce3680) (1) Data frame sent I0509 22:18:07.517694 7 log.go:172] (0xc0008fe420) (0xc000ce3680) Stream removed, broadcasting: 1 I0509 22:18:07.517735 7 log.go:172] (0xc0008fe420) (0xc000ce3680) Stream removed, broadcasting: 1 I0509 22:18:07.517745 7 log.go:172] (0xc0008fe420) (0xc000ce3a40) Stream removed, broadcasting: 3 I0509 22:18:07.517752 7 log.go:172] (0xc0008fe420) (0xc000ce3f40) Stream removed, broadcasting: 5 May 9 22:18:07.517: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:07.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0509 22:18:07.517824 7 log.go:172] (0xc0008fe420) Go away received STEP: Destroying namespace "emptydir-2903" for this suite. 
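------------------------------
Note: the exec above cats /usr/share/volumeshare/shareddata.txt from busybox-main-container; the file is written by a sibling container through a shared emptyDir. A minimal sketch of that pod (names, image, and file content are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo       # hypothetical name
spec:
  containers:
  - name: busybox-main-container
    image: busybox                  # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo shared > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}                    # one emptyDir mounted into both containers
------------------------------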
• [SLOW TEST:6.541 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":252,"skipped":4006,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:07.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 9 22:18:12.270: INFO: Successfully updated pod "annotationupdate370a9d8b-ae7c-4474-9cdb-4ce0985ffdd2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:14.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8459" for this suite. 
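------------------------------
Note: the projected-downwardAPI run above updates a pod annotation and expects the change to show up in a file projected from the downward API. A minimal sketch (names, image, and annotation value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo       # hypothetical name
  annotations:
    build: one                      # the test patches an annotation and re-reads the file
spec:
  containers:
  - name: client-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations       # the kubelet refreshes this file when annotations change
            fieldRef:
              fieldPath: metadata.annotations
------------------------------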
• [SLOW TEST:6.734 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:14.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a02ac599-f09e-4aa4-9ba9-ae0748373ee9 STEP: Creating a pod to test consume secrets May 9 22:18:14.414: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c" in namespace "projected-8407" to be "success or failure" May 9 22:18:14.465: INFO: Pod "pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.363881ms May 9 22:18:16.470: INFO: Pod "pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055822404s May 9 22:18:18.473: INFO: Pod "pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059102474s STEP: Saw pod success May 9 22:18:18.473: INFO: Pod "pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c" satisfied condition "success or failure" May 9 22:18:18.476: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c container projected-secret-volume-test: STEP: delete the pod May 9 22:18:18.622: INFO: Waiting for pod pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c to disappear May 9 22:18:18.677: INFO: Pod pod-projected-secrets-c3c533fe-e51c-40e1-8757-71187c90405c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:18.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8407" for this suite. 
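------------------------------
Note: the projected-secret run above mounts a secret with an explicit defaultMode while running as a non-root user with an fsGroup. A minimal sketch (names, image, uid/gid, and mode are illustrative; the secret is assumed to exist):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo  # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                 # non-root
    fsGroup: 1001                   # group ownership applied to the projected files
  containers:
  - name: projected-secret-volume-test
    image: busybox                  # illustrative image
    command: ["sh", "-c", "ls -l /etc/projected-secret"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440             # file mode of the kind this test variant asserts on
      sources:
      - secret:
          name: projected-secret-demo   # hypothetical; assumed to exist
------------------------------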
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4044,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:18.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:18:19.273: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:18:21.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659499, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659499, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659499, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659499, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:18:24.435: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:24.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2056" for this suite. STEP: Destroying namespace "webhook-2056-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":255,"skipped":4059,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:24.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-303d60d4-a22e-4257-b726-8c90edc4fede STEP: Creating a pod to test consume secrets May 9 22:18:24.942: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e" in namespace "projected-2815" to be "success or failure" May 9 22:18:25.004: INFO: Pod "pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e": Phase="Pending", Reason="", readiness=false. Elapsed: 61.248025ms May 9 22:18:27.046: INFO: Pod "pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103721715s May 9 22:18:29.051: INFO: Pod "pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108020461s STEP: Saw pod success May 9 22:18:29.051: INFO: Pod "pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e" satisfied condition "success or failure" May 9 22:18:29.053: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e container projected-secret-volume-test: STEP: delete the pod May 9 22:18:29.090: INFO: Waiting for pod pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e to disappear May 9 22:18:29.102: INFO: Pod pod-projected-secrets-0597d6db-f2c9-4dd7-b16e-5a511ed6f57e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:29.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2815" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:29.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 9 22:18:29.188: INFO: Waiting up to 5m0s for pod "pod-4124b0b2-a186-4f7a-9427-13917dee2fe9" in namespace "emptydir-1842" to be "success or failure" May 9 22:18:29.192: INFO: Pod "pod-4124b0b2-a186-4f7a-9427-13917dee2fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.860861ms May 9 22:18:31.196: INFO: Pod "pod-4124b0b2-a186-4f7a-9427-13917dee2fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007861934s May 9 22:18:33.200: INFO: Pod "pod-4124b0b2-a186-4f7a-9427-13917dee2fe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011706111s STEP: Saw pod success May 9 22:18:33.200: INFO: Pod "pod-4124b0b2-a186-4f7a-9427-13917dee2fe9" satisfied condition "success or failure" May 9 22:18:33.202: INFO: Trying to get logs from node jerma-worker pod pod-4124b0b2-a186-4f7a-9427-13917dee2fe9 container test-container: STEP: delete the pod May 9 22:18:33.281: INFO: Waiting for pod pod-4124b0b2-a186-4f7a-9427-13917dee2fe9 to disappear May 9 22:18:33.288: INFO: Pod pod-4124b0b2-a186-4f7a-9427-13917dee2fe9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:33.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1842" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4129,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:33.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:18:40.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-848" for this suite. • [SLOW TEST:7.217 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":258,"skipped":4142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:18:40.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:18:40.665: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 9 22:18:40.691: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:40.782: INFO: Number of nodes with available pods: 0 May 9 22:18:40.782: INFO: Node jerma-worker is running more than one daemon pod May 9 22:18:41.788: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:41.791: INFO: Number of nodes with available pods: 0 May 9 22:18:41.791: INFO: Node jerma-worker is running more than one daemon pod May 9 22:18:42.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:42.796: INFO: Number of nodes with available pods: 0 May 9 22:18:42.796: INFO: Node jerma-worker is running more than one daemon pod May 9 22:18:43.787: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:43.813: INFO: Number of nodes with available pods: 0 May 9 22:18:43.813: INFO: Node jerma-worker is running more than one daemon pod May 9 22:18:44.787: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:44.790: INFO: Number of nodes with available pods: 2 May 9 22:18:44.790: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 9 22:18:44.822: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:44.822: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:44.841: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:45.849: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:45.849: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:45.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:46.847: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:46.847: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:46.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:47.847: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 9 22:18:47.847: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:47.852: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:48.845: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:48.845: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:48.845: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:48.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:49.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:49.846: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:49.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:49.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:50.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:50.847: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:50.847: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:50.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:51.847: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:51.847: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:51.847: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:51.866: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:52.847: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:52.847: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:52.847: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:52.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:53.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 9 22:18:53.846: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:53.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:53.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:54.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:54.846: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:54.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:54.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:55.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:55.846: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:55.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:55.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:56.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:56.846: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:56.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:56.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:57.846: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:57.846: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:57.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:57.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:58.845: INFO: Wrong image for pod: daemon-set-2nmhh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:58.845: INFO: Pod daemon-set-2nmhh is not available May 9 22:18:58.845: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:18:58.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:18:59.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 9 22:18:59.846: INFO: Pod daemon-set-b2lcm is not available May 9 22:18:59.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:00.845: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:00.845: INFO: Pod daemon-set-b2lcm is not available May 9 22:19:00.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:01.858: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:01.858: INFO: Pod daemon-set-b2lcm is not available May 9 22:19:01.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:02.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:02.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:03.847: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:03.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:04.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:04.846: INFO: Pod daemon-set-82df9 is not available May 9 22:19:04.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:05.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:05.846: INFO: Pod daemon-set-82df9 is not available May 9 22:19:05.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:06.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:06.846: INFO: Pod daemon-set-82df9 is not available May 9 22:19:06.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:07.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 9 22:19:07.846: INFO: Pod daemon-set-82df9 is not available May 9 22:19:07.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:08.846: INFO: Wrong image for pod: daemon-set-82df9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 9 22:19:08.846: INFO: Pod daemon-set-82df9 is not available May 9 22:19:08.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:09.846: INFO: Pod daemon-set-vcfmk is not available May 9 22:19:09.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 9 22:19:09.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:09.856: INFO: Number of nodes with available pods: 1 May 9 22:19:09.856: INFO: Node jerma-worker is running more than one daemon pod May 9 22:19:10.861: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:10.864: INFO: Number of nodes with available pods: 1 May 9 22:19:10.864: INFO: Node jerma-worker is running more than one daemon pod May 9 22:19:11.860: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:11.864: INFO: Number of nodes with available pods: 1 May 9 22:19:11.864: INFO: Node jerma-worker is running more than one daemon pod May 9 22:19:12.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 22:19:12.866: INFO: Number of nodes with available pods: 2 May 9 22:19:12.866: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-901, will wait for the garbage collector to delete the pods May 9 22:19:12.940: INFO: Deleting DaemonSet.extensions daemon-set took: 6.818519ms May 9 22:19:13.241: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.256457ms May 9 22:19:19.344: INFO: Number of nodes with available pods: 0 May 9 22:19:19.344: INFO: Number of running nodes: 0, number of available pods: 0 May 9 22:19:19.348: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-901/daemonsets","resourceVersion":"14817229"},"items":null} May 9 22:19:19.350: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-901/pods","resourceVersion":"14817229"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:19:19.360: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-901" for this suite. • [SLOW TEST:38.895 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":259,"skipped":4186,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:19:19.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 9 22:19:20.151: INFO: Pod name wrapped-volume-race-53dfc198-3add-4f17-b280-d24b0a4adafd: Found 0 pods out of 5 May 9 22:19:25.159: INFO: Pod name wrapped-volume-race-53dfc198-3add-4f17-b280-d24b0a4adafd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-53dfc198-3add-4f17-b280-d24b0a4adafd in namespace emptydir-wrapper-9564, will wait for the garbage collector to delete the pods May 9 22:19:39.276: INFO: Deleting ReplicationController wrapped-volume-race-53dfc198-3add-4f17-b280-d24b0a4adafd took: 5.576279ms May 9 22:19:39.577: INFO: Terminating ReplicationController wrapped-volume-race-53dfc198-3add-4f17-b280-d24b0a4adafd pods took: 300.269998ms STEP: Creating RC which spawns configmap-volume pods May 9 22:19:50.639: INFO: Pod name wrapped-volume-race-aeffeac3-01fe-4507-9cb7-fd2fe2539ad0: Found 0 pods out of 5 May 9 22:19:55.652: INFO: Pod name wrapped-volume-race-aeffeac3-01fe-4507-9cb7-fd2fe2539ad0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-aeffeac3-01fe-4507-9cb7-fd2fe2539ad0 in namespace emptydir-wrapper-9564, will wait for the garbage collector to delete the pods May 9 22:20:09.747: INFO: Deleting ReplicationController wrapped-volume-race-aeffeac3-01fe-4507-9cb7-fd2fe2539ad0 took: 8.032934ms May 9 22:20:10.048: INFO: Terminating ReplicationController wrapped-volume-race-aeffeac3-01fe-4507-9cb7-fd2fe2539ad0 pods took: 300.269903ms STEP: Creating RC which spawns configmap-volume pods May 9 22:20:20.273: INFO: Pod name wrapped-volume-race-472e089c-a90c-4a7c-9c9e-d39095dd71de: Found 0 pods out of 5 May 9 22:20:25.280: INFO: Pod name wrapped-volume-race-472e089c-a90c-4a7c-9c9e-d39095dd71de: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-472e089c-a90c-4a7c-9c9e-d39095dd71de in namespace emptydir-wrapper-9564, will wait for the garbage 
collector to delete the pods May 9 22:20:39.356: INFO: Deleting ReplicationController wrapped-volume-race-472e089c-a90c-4a7c-9c9e-d39095dd71de took: 6.694205ms May 9 22:20:39.656: INFO: Terminating ReplicationController wrapped-volume-race-472e089c-a90c-4a7c-9c9e-d39095dd71de pods took: 300.246863ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:20:51.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9564" for this suite. • [SLOW TEST:91.785 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":260,"skipped":4196,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:20:51.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 9 22:20:51.273: INFO: Waiting up to 5m0s for pod "downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92" in namespace "downward-api-7705" to be "success or failure" May 9 22:20:51.314: INFO: Pod "downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92": Phase="Pending", Reason="", readiness=false. Elapsed: 40.9504ms May 9 22:20:53.319: INFO: Pod "downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045419641s May 9 22:20:55.323: INFO: Pod "downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049353594s STEP: Saw pod success May 9 22:20:55.323: INFO: Pod "downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92" satisfied condition "success or failure" May 9 22:20:55.326: INFO: Trying to get logs from node jerma-worker2 pod downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92 container dapi-container: STEP: delete the pod May 9 22:20:55.371: INFO: Waiting for pod downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92 to disappear May 9 22:20:55.380: INFO: Pod downward-api-15fc64df-2db0-4e82-8aca-102c01edaf92 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:20:55.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7705" for this suite. 
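The Downward API test that just ran injects the container's own resource limits and requests into its environment via resourceFieldRef. A minimal sketch of such a pod in Go, assuming illustrative resource values and variable names (the suite's actual ones are not shown in this log); only the container name dapi-container comes from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29", // stand-in image
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{
						// Resolved by the kubelet from the container's own spec.
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_REQUEST",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}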
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4201,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:20:55.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2168.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2168.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 22:21:03.538: INFO: DNS probes using dns-test-f9cfb0f8-bfcc-4dd1-9ed3-da07aabb3ea4 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2168.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2168.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 22:21:09.658: INFO: File wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:09.661: INFO: File jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:09.661: INFO: Lookups using dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 failed for: [wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local] May 9 22:21:14.666: INFO: File wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:14.669: INFO: File jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 9 22:21:14.669: INFO: Lookups using dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 failed for: [wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local] May 9 22:21:19.666: INFO: File wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:19.669: INFO: File jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:19.669: INFO: Lookups using dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 failed for: [wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local] May 9 22:21:24.666: INFO: File wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:24.670: INFO: File jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:24.670: INFO: Lookups using dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 failed for: [wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local] May 9 22:21:29.666: INFO: File wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:29.668: INFO: File jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local from pod dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 9 22:21:29.668: INFO: Lookups using dns-2168/dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 failed for: [wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local] May 9 22:21:34.671: INFO: DNS probes using dns-test-0482378e-9938-4197-a66a-5fa4c81ab8f4 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2168.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2168.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2168.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2168.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 22:21:43.344: INFO: DNS probes using dns-test-b95e344b-209a-43a4-b0fc-c2c02435d80a succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:21:43.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2168" for this suite. 
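The DNS test above drives everything from a single ExternalName service: the probe pods run the dig loops shown in the STEP lines and write the CNAME answers to result files, so the expected answer flips from foo.example.com to bar.example.com when the service is updated, and changes again to an A-record lookup once the service becomes type ClusterIP. A Go sketch of the initial service; name and namespace are from the log, and the externalName foo.example.com is implied by the probe output:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-test-service-3",
			Namespace: "dns-2168",
		},
		Spec: corev1.ServiceSpec{
			// ExternalName services publish a CNAME rather than a ClusterIP,
			// which is why the probes above use `dig ... CNAME`.
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}

The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are expected noise: the probes keep reporting the stale CNAME until the cluster DNS picks up the spec change, at which point the lookups succeed.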
• [SLOW TEST:48.126 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":262,"skipped":4204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:21:43.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:21:44.360: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:21:46.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 22:21:48.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:21:51.444: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:21:51.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9330" for this suite. STEP: Destroying namespace "webhook-9330-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.169 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":263,"skipped":4247,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:21:51.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-5338 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5338 STEP: Deleting pre-stop pod May 9 22:22:04.828: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:22:04.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5338" for this suite. • [SLOW TEST:13.244 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":264,"skipped":4248,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:22:04.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 9 22:22:15.462: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 9 22:22:15.504: INFO: Pod pod-with-prestop-http-hook still exists May 9 22:22:17.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 9 22:22:17.508: INFO: Pod pod-with-prestop-http-hook still exists May 9 22:22:19.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 9 22:22:19.508: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:22:19.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1627" for this suite. 
• [SLOW TEST:14.586 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4256,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:22:19.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 9 22:22:19.611: INFO: Waiting up to 5m0s for pod "pod-708c48a1-487c-43e1-af65-b20a61034cc5" in namespace "emptydir-4320" to be "success or failure" May 9 22:22:19.636: INFO: Pod "pod-708c48a1-487c-43e1-af65-b20a61034cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.476246ms May 9 22:22:21.640: INFO: Pod "pod-708c48a1-487c-43e1-af65-b20a61034cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028838117s May 9 22:22:23.645: INFO: Pod "pod-708c48a1-487c-43e1-af65-b20a61034cc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033127572s STEP: Saw pod success May 9 22:22:23.645: INFO: Pod "pod-708c48a1-487c-43e1-af65-b20a61034cc5" satisfied condition "success or failure" May 9 22:22:23.648: INFO: Trying to get logs from node jerma-worker pod pod-708c48a1-487c-43e1-af65-b20a61034cc5 container test-container: STEP: delete the pod May 9 22:22:23.687: INFO: Waiting for pod pod-708c48a1-487c-43e1-af65-b20a61034cc5 to disappear May 9 22:22:23.714: INFO: Pod pod-708c48a1-487c-43e1-af65-b20a61034cc5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:22:23.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4320" for this suite. 
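This is the second emptyDir permission-matrix entry in this section (the earlier one was (non-root,0777,default)). Each of these tests reduces to: mount an emptyDir on the default medium, have the container create a file with the requested mode, and assert on the output before the pod is cleaned up. A sketch under stated assumptions: busybox and the shell command stand in for the suite's actual test image, and the non-root UID is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1001) // illustrative non-root UID for the (non-root,...) variants
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium empty selects the node's default storage
				// medium, matching the "(..,..,default)" part of the test names.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &runAsUser},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}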
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:22:23.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f354cf7b-f88b-4830-bde5-1c68cb277ddd STEP: Creating a pod to test consume secrets May 9 22:22:23.857: INFO: Waiting up to 5m0s for pod "pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd" in namespace "secrets-7540" to be "success or failure" May 9 22:22:23.861: INFO: Pod "pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252839ms May 9 22:22:26.541: INFO: Pod "pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.684043724s May 9 22:22:28.546: INFO: Pod "pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689057796s May 9 22:22:30.550: INFO: Pod "pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.693097993s STEP: Saw pod success May 9 22:22:30.550: INFO: Pod "pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd" satisfied condition "success or failure" May 9 22:22:30.554: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd container secret-volume-test: STEP: delete the pod May 9 22:22:30.628: INFO: Waiting for pod pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd to disappear May 9 22:22:30.641: INFO: Pod pod-secrets-9f23d187-57e8-4403-9125-fe021133d7fd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:22:30.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7540" for this suite. STEP: Destroying namespace "secret-namespace-1720" for this suite. 
• [SLOW TEST:6.932 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4304,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:22:30.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 9 22:22:30.707: INFO: Waiting up to 5m0s for pod "downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c" in namespace "downward-api-9699" to be "success or failure" May 9 22:22:30.710: INFO: Pod "downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.6364ms May 9 22:22:32.714: INFO: Pod "downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006547683s May 9 22:22:34.718: INFO: Pod "downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010611397s STEP: Saw pod success May 9 22:22:34.718: INFO: Pod "downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c" satisfied condition "success or failure" May 9 22:22:34.724: INFO: Trying to get logs from node jerma-worker2 pod downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c container dapi-container: STEP: delete the pod May 9 22:22:34.745: INFO: Waiting for pod downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c to disappear May 9 22:22:34.750: INFO: Pod downward-api-ae5509b3-b087-4e1b-8c91-b60bbeff909c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:22:34.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9699" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4305,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:22:34.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:22:50.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4358" for this suite. • [SLOW TEST:16.243 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":269,"skipped":4323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:22:51.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 9 22:22:51.085: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 22:22:51.097: INFO: Waiting for terminating namespaces to be deleted... 
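Stepping back to the ResourceQuota scope test that completed just above: a BestEffort-scoped quota only counts pods that set no resource requests or limits, which is exactly what the paired "captures the pod usage" and "ignored the pod usage" steps assert from both directions. A Go sketch of such a quota; the name and the hard pod count are assumed values, not from this log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "best-effort-quota"}, // illustrative name
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("5"), // assumed limit
			},
			// Only BestEffort pods (no requests or limits on any container)
			// are charged against this quota; the "not best effort" twin
			// quota in the test uses ResourceQuotaScopeNotBestEffort.
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
		},
	}
	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}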
May 9 22:22:51.100: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 9 22:22:51.105: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:22:51.105: INFO: Container kindnet-cni ready: true, restart count 0 May 9 22:22:51.105: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:22:51.105: INFO: Container kube-proxy ready: true, restart count 0 May 9 22:22:51.105: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 9 22:22:51.110: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:22:51.110: INFO: Container kube-proxy ready: true, restart count 0 May 9 22:22:51.110: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 9 22:22:51.110: INFO: Container kube-hunter ready: false, restart count 0 May 9 22:22:51.110: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 9 22:22:51.110: INFO: Container kindnet-cni ready: true, restart count 0 May 9 22:22:51.110: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 9 22:22:51.110: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-227e977c-e6b4-4ffc-9eaf-70dd8f9525c7 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but using UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-227e977c-e6b4-4ffc-9eaf-70dd8f9525c7 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-227e977c-e6b4-4ffc-9eaf-70dd8f9525c7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:07.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2679" for this suite.
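The scheduling rule this test exercises: two pods may share a hostPort as long as the (hostIP, protocol, hostPort) triple differs, so pod2 (same port, different hostIP) and pod3 (same port and hostIP as pod2, but UDP) both still fit on the same node. A sketch of the three port declarations using the values from the log; TCP for pod1 and pod2 is implied rather than stated by the test's phrasing:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// (hostIP, protocol, hostPort) triples from the test; any two pods whose
	// triples differ can be scheduled onto the same node without conflict.
	ports := []corev1.ContainerPort{
		{HostIP: "127.0.0.1", HostPort: 54321, ContainerPort: 54321, Protocol: corev1.ProtocolTCP}, // pod1
		{HostIP: "127.0.0.2", HostPort: 54321, ContainerPort: 54321, Protocol: corev1.ProtocolTCP}, // pod2
		{HostIP: "127.0.0.2", HostPort: 54321, ContainerPort: 54321, Protocol: corev1.ProtocolUDP}, // pod3
	}
	for i, p := range ports {
		fmt.Printf("pod%d: hostIP=%s protocol=%s hostPort=%d\n", i+1, p.HostIP, p.Protocol, p.HostPort)
	}
}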
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.381 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":270,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:07.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:07.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6743" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":271,"skipped":4384,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:07.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4347/secret-test-0b899c51-9f8e-4bed-a23c-dd6ed82dcd92 STEP: Creating a pod to test consume secrets May 9 22:23:07.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3" in namespace "secrets-4347" to be "success or failure" May 9 22:23:07.534: INFO: Pod "pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504271ms May 9 22:23:09.539: INFO: Pod "pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007744905s May 9 22:23:11.543: INFO: Pod "pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012575687s STEP: Saw pod success May 9 22:23:11.543: INFO: Pod "pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3" satisfied condition "success or failure" May 9 22:23:11.547: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3 container env-test: STEP: delete the pod May 9 22:23:11.564: INFO: Waiting for pod pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3 to disappear May 9 22:23:11.570: INFO: Pod pod-configmaps-85a46de1-f185-4c96-a013-1fe8cb7bc3c3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:11.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4347" for this suite. 
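------------------------------
"Consumable via the environment" means the pod wires the secret into an env entry with valueFrom.secretKeyRef rather than mounting it as a volume; the framework then checks the container's output for the injected value. A minimal sketch with the k8s.io/api/core/v1 types (secret key, variable name, and image are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "secrets-4347"},
            StringData: map[string]string{"data-1": "value-1"},
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "env-test", Namespace: "secrets-4347"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        // Pull the value straight from the secret at container start.
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        fmt.Println(pod.Name, "consumes", secret.Name)
    }

------------------------------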
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:11.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 9 22:23:12.092: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 9 22:23:14.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 22:23:16.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724659792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 9 22:23:19.150: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 9 22:23:23.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5283 to-be-attached-pod -i -c=container1' May 9 22:23:23.376: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:23.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5283" for this suite. STEP: Destroying namespace "webhook-5283-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.891 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":273,"skipped":4478,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:23.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:28.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8673" for this suite. 
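------------------------------
The property the watch spec checks is that every watcher of the same resource, no matter which resourceVersion it starts from, observes the remaining events in an identical order. One such watcher, stripped down and assuming the context-aware Watch signature of newer client-go releases:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/apimachinery/pkg/api/meta"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // The e2e test opens one watch per resourceVersion it has observed;
        // "0" here simply means "start from any recent version". Every such
        // watcher must report the subsequent resourceVersions in the same order.
        w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(),
            metav1.ListOptions{ResourceVersion: "0"})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            obj, err := meta.Accessor(ev.Object)
            if err != nil {
                panic(err)
            }
            fmt.Println(ev.Type, obj.GetResourceVersion())
        }
    }

------------------------------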
• [SLOW TEST:5.646 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":274,"skipped":4480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:29.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:23:29.427: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 9 22:23:29.444: INFO: Number of nodes with available pods: 0 May 9 22:23:29.444: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
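------------------------------
The "complex" DaemonSet under test carries a nodeSelector in its pod template, so flipping a node's label between blue and green schedules and evicts the daemon pod as the polling below shows; partway through, the test also rewrites the selector and switches the update strategy to RollingUpdate. A sketch of that object with the k8s.io/api/apps/v1 types (label keys and image are illustrative):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-6397"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // Switching this to RollingUpdateDaemonSetStrategyType is the
                // second half of the test.
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.OnDeleteDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Only nodes labeled color=blue run a daemon pod; relabeling
                        // a node to color=green evicts its pod until the selector in
                        // the DaemonSet is updated to match.
                        NodeSelector: map[string]string{"color": "blue"},
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                        }},
                    },
                },
            },
        }
        fmt.Println(ds.Name)
    }

------------------------------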
May 9 22:23:29.505: INFO: Number of nodes with available pods: 0 May 9 22:23:29.505: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:30.510: INFO: Number of nodes with available pods: 0 May 9 22:23:30.510: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:31.511: INFO: Number of nodes with available pods: 0 May 9 22:23:31.511: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:32.510: INFO: Number of nodes with available pods: 1 May 9 22:23:32.510: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 9 22:23:32.589: INFO: Number of nodes with available pods: 1 May 9 22:23:32.589: INFO: Number of running nodes: 0, number of available pods: 1 May 9 22:23:33.594: INFO: Number of nodes with available pods: 0 May 9 22:23:33.594: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 9 22:23:33.603: INFO: Number of nodes with available pods: 0 May 9 22:23:33.603: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:34.615: INFO: Number of nodes with available pods: 0 May 9 22:23:34.615: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:35.607: INFO: Number of nodes with available pods: 0 May 9 22:23:35.607: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:36.607: INFO: Number of nodes with available pods: 0 May 9 22:23:36.607: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:37.625: INFO: Number of nodes with available pods: 0 May 9 22:23:37.625: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:38.607: INFO: Number of nodes with available pods: 0 May 9 22:23:38.607: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:39.649: INFO: Number of nodes with available pods: 0 May 9 22:23:39.649: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:40.607: INFO: Number of nodes with available pods: 0 May 9 22:23:40.607: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:41.607: INFO: Number of nodes with available pods: 0 May 9 22:23:41.607: INFO: Node jerma-worker2 is running more than one daemon pod May 9 22:23:42.608: INFO: Number of nodes with available pods: 1 May 9 22:23:42.608: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6397, will wait for the garbage collector to delete the pods May 9 22:23:42.672: INFO: Deleting DaemonSet.extensions daemon-set took: 6.217747ms May 9 22:23:42.973: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.428108ms May 9 22:23:49.576: INFO: Number of nodes with available pods: 0 May 9 22:23:49.576: INFO: Number of running nodes: 0, number of available pods: 0 May 9 22:23:49.582: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6397/daemonsets","resourceVersion":"14819665"},"items":null} May 9 22:23:49.585: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6397/pods","resourceVersion":"14819665"},"items":null} [AfterEach] [sig-apps] Daemon 
set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:49.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6397" for this suite. • [SLOW TEST:20.501 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":275,"skipped":4533,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:49.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 9 22:23:54.067: INFO: Pod pod-hostip-8bf3c69f-d73e-4c65-ac39-8a22dbf2be56 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:23:54.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4678" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4541,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:23:54.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0509 22:24:24.684339 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
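------------------------------
The orphaning behavior in the garbage-collector spec above hinges on the delete options alone: with PropagationPolicy=Orphan, the garbage collector strips the ownerReferences from the deployment's ReplicaSet instead of cascading the delete, which is exactly what the 30-second wait verifies. A sketch against a client-go Clientset, assuming the context-aware Delete signature of newer client-go releases (the deployment name is illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Orphan, not cascade: the ReplicaSet loses its ownerReference and
        // survives the deletion of its parent deployment.
        orphan := metav1.DeletePropagationOrphan
        err = client.AppsV1().Deployments("gc-7461").Delete(context.TODO(),
            "example-deployment", metav1.DeleteOptions{PropagationPolicy: &orphan})
        if err != nil {
            panic(err)
        }
    }

------------------------------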
May 9 22:24:24.684: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:24:24.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7461" for this suite. • [SLOW TEST:30.617 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":277,"skipped":4550,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 9 22:24:24.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 9 22:24:24.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7105' May 9 22:24:24.979: INFO: stderr: "" May 9 22:24:24.979: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 9 22:24:24.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7105' May 9 22:24:25.299: INFO: stderr: "" May 9 22:24:25.299: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
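------------------------------
Each describe call that follows is the framework shelling out to the kubectl binary, as the Running '/usr/local/bin/kubectl ...' lines show, then asserting that the interesting fields appear in the captured output. The same invocation reduced to its essentials in Go (paths and names copied from the log below):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Shell out exactly as the framework's kubectl helper does and
        // capture stdout and stderr together for the assertions.
        out, err := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "describe", "pod", "agnhost-master-98mqw",
            "--namespace=kubectl-7105").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        fmt.Printf("%s", out)
    }

------------------------------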
May 9 22:24:26.320: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:24:26.320: INFO: Found 0 / 1 May 9 22:24:27.317: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:24:27.317: INFO: Found 0 / 1 May 9 22:24:28.304: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:24:28.304: INFO: Found 1 / 1 May 9 22:24:28.304: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 9 22:24:28.308: INFO: Selector matched 1 pods for map[app:agnhost] May 9 22:24:28.308: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 9 22:24:28.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-98mqw --namespace=kubectl-7105' May 9 22:24:28.424: INFO: stderr: "" May 9 22:24:28.424: INFO: stdout: "Name: agnhost-master-98mqw\nNamespace: kubectl-7105\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Sat, 09 May 2020 22:24:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.167\nIPs:\n IP: 10.244.1.167\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://8a41010e7b45ba6e7918367d9060b9a48e797f93d3e325f9585ddb433e6bb529\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 09 May 2020 22:24:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-rq87m (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-rq87m:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-rq87m\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7105/agnhost-master-98mqw to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 9 22:24:28.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7105' May 9 22:24:28.551: INFO: stderr: "" May 9 22:24:28.551: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7105\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-98mqw\n" May 9 22:24:28.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7105' May 9 22:24:28.662: INFO: stderr: "" May 9 22:24:28.662: INFO: stdout: "Name: 
agnhost-master\nNamespace: kubectl-7105\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.154.228\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.167:6379\nSession Affinity: None\nEvents: \n" May 9 22:24:28.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 9 22:24:28.804: INFO: stderr: "" May 9 22:24:28.804: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sat, 09 May 2020 22:24:27 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 09 May 2020 22:22:37 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 09 May 2020 22:22:37 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 09 May 2020 22:22:37 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 09 May 2020 22:22:37 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 55d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 55d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 55d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 55d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 55d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 55d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 55d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 55d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 55d\nAllocated resources:\n (Total limits may be over 
100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 9 22:24:28.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7105' May 9 22:24:28.927: INFO: stderr: "" May 9 22:24:28.927: INFO: stdout: "Name: kubectl-7105\nLabels: e2e-framework=kubectl\n e2e-run=1731d905-1414-4d35-b0f3-4c9009c2d827\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 9 22:24:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7105" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":278,"skipped":4553,"failed":0} SSSSSSSSSSSMay 9 22:24:28.934: INFO: Running AfterSuite actions on all nodes May 9 22:24:28.934: INFO: Running AfterSuite actions on node 1 May 9 22:24:28.935: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4554.032 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS