I0325 23:36:40.115103 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0325 23:36:40.115354 7 e2e.go:124] Starting e2e run "5c7da4cb-aa7e-4d4e-bdbe-efbb09c143c4" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585179399 - Will randomize all specs
Will run 275 of 4992 specs

Mar 25 23:36:40.171: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 23:36:40.175: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 23:36:40.199: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 23:36:40.237: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 23:36:40.237: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 23:36:40.237: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 23:36:40.244: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 23:36:40.244: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 23:36:40.244: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Mar 25 23:36:40.245: INFO: kube-apiserver version: v1.17.0
Mar 25 23:36:40.245: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 23:36:40.249: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:36:40.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
Mar 25 23:36:40.302: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:36:56.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1781" for this suite.

• [SLOW TEST:16.234 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":1,"skipped":6,"failed":0}
SSSSSSS
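
The spec above exercises quota accounting from both sides of the BestEffort scope: a pod with no resource requests or limits is counted only by the BestEffort-scoped quota, while a pod with resources set is counted only by the NotBestEffort-scoped one. A minimal client-go sketch of the two quota objects this test creates; the quota names and the pod-count limit are illustrative, not taken from the test source (the namespace is this run's):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // Quota that only counts BestEffort pods (no requests/limits set).
    bestEffort := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"}, // illustrative name
        Spec: corev1.ResourceQuotaSpec{
            Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
            Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
        },
    }
    // Quota that counts everything except BestEffort pods.
    notBestEffort := bestEffort.DeepCopy()
    notBestEffort.Name = "quota-not-besteffort"
    notBestEffort.Spec.Scopes = []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeNotBestEffort}

    for _, q := range []*corev1.ResourceQuota{bestEffort, notBestEffort} {
        if _, err := cs.CoreV1().ResourceQuotas("resourcequota-1781").Create(ctx, q, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
}
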
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:36:56.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-18707af5-10ce-472a-ad6a-c82e2de0a619
STEP: Creating a pod to test consume secrets
Mar 25 23:36:56.582: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5" in namespace "projected-752" to be "Succeeded or Failed"
Mar 25 23:36:56.590: INFO: Pod "pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262533ms
Mar 25 23:36:58.594: INFO: Pod "pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012365774s
Mar 25 23:37:00.598: INFO: Pod "pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015868125s
STEP: Saw pod success
Mar 25 23:37:00.598: INFO: Pod "pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5" satisfied condition "Succeeded or Failed"
Mar 25 23:37:00.600: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5 container projected-secret-volume-test:
STEP: delete the pod
Mar 25 23:37:00.632: INFO: Waiting for pod pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5 to disappear
Mar 25 23:37:00.659: INFO: Pod pod-projected-secrets-fc48df89-275d-4111-9147-0a66e94e6ae5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:37:00.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-752" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":13,"failed":0}
SSSSSSSSS
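
A sketch of the pod this spec builds: a Secret consumed through a projected volume, with a per-item path mapping and an explicit file mode. The secret name, key, mode, and busybox image here are illustrative stand-ins, not the test's own values:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    mode := int32(0400) // "Item Mode set": per-file mode on the projected item
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                // "with mappings": the key is remapped to a new path.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                }},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("projected-752").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
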
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:37:00.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-60da1bbe-6acb-4bb6-9ec4-32cfc7d8e15e
STEP: Creating a pod to test consume secrets
Mar 25 23:37:00.736: INFO: Waiting up to 5m0s for pod "pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4" in namespace "secrets-412" to be "Succeeded or Failed"
Mar 25 23:37:00.757: INFO: Pod "pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.978587ms
Mar 25 23:37:02.800: INFO: Pod "pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064597861s
Mar 25 23:37:04.804: INFO: Pod "pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068593067s
STEP: Saw pod success
Mar 25 23:37:04.804: INFO: Pod "pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4" satisfied condition "Succeeded or Failed"
Mar 25 23:37:04.807: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4 container secret-volume-test:
STEP: delete the pod
Mar 25 23:37:04.852: INFO: Waiting for pod pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4 to disappear
Mar 25 23:37:04.866: INFO: Pod pod-secrets-8f17cbca-73a6-46f6-bd95-703f6180eed4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:37:04.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-412" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":22,"failed":0}
SS
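
The distinguishing pieces of this spec are the volume's defaultMode plus a pod-level security context: running as a non-root UID with an fsGroup so the group can still read the projected files. A minimal sketch; the UID/GID, names, mode, and image are illustrative assumptions:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    mode := int32(0440)
    uid, fsGroup := int64(1000), int64(1000)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,     // non-root
                FSGroup:   &fsGroup, // group ownership applied to volume files
            },
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-test",
                        DefaultMode: &mode, // applied to every projected file
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("secrets-412").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
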
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:37:04.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 25 23:37:09.463: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2013 pod-service-account-ae6d13c9-1e00-4ae0-adc9-2bab8ae0bb25 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 25 23:37:11.868: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2013 pod-service-account-ae6d13c9-1e00-4ae0-adc9-2bab8ae0bb25 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 25 23:37:12.079: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2013 pod-service-account-ae6d13c9-1e00-4ae0-adc9-2bab8ae0bb25 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:37:12.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2013" for this suite.

• [SLOW TEST:7.411 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":4,"skipped":24,"failed":0}
S
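
The three kubectl exec ... cat commands above read the standard service account mount. Inside any pod whose token is mounted (the default), the token, cluster CA bundle, and namespace are plain files under a fixed path; a small in-pod Go equivalent of those reads:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    const base = "/var/run/secrets/kubernetes.io/serviceaccount"
    for _, name := range []string{"token", "ca.crt", "namespace"} {
        data, err := os.ReadFile(filepath.Join(base, name))
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s: %d bytes\n", name, len(data))
    }
}
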
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:37:12.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 25 23:37:12.350: INFO: Waiting up to 5m0s for pod "pod-bd871697-6f32-4f8f-aa14-8385c28df474" in namespace "emptydir-497" to be "Succeeded or Failed"
Mar 25 23:37:12.352: INFO: Pod "pod-bd871697-6f32-4f8f-aa14-8385c28df474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193304ms
Mar 25 23:37:14.355: INFO: Pod "pod-bd871697-6f32-4f8f-aa14-8385c28df474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005687317s
Mar 25 23:37:16.359: INFO: Pod "pod-bd871697-6f32-4f8f-aa14-8385c28df474": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009019061s
STEP: Saw pod success
Mar 25 23:37:16.359: INFO: Pod "pod-bd871697-6f32-4f8f-aa14-8385c28df474" satisfied condition "Succeeded or Failed"
Mar 25 23:37:16.361: INFO: Trying to get logs from node latest-worker2 pod pod-bd871697-6f32-4f8f-aa14-8385c28df474 container test-container:
STEP: delete the pod
Mar 25 23:37:16.388: INFO: Waiting for pod pod-bd871697-6f32-4f8f-aa14-8385c28df474 to disappear
Mar 25 23:37:16.399: INFO: Pod pod-bd871697-6f32-4f8f-aa14-8385c28df474 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:37:16.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-497" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
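
"(root,0777,tmpfs)" decodes as: run as root, expect mode 0777 files, on a memory-backed emptyDir. An emptyDir with Medium set to Memory is mounted as tmpfs. A sketch under those assumptions (pod/volume names and the busybox commands are illustrative):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("emptydir-497").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
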
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:37:16.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-a8f1f5ef-3f4f-4c37-9669-e7761daba809
STEP: Creating a pod to test consume configMaps
Mar 25 23:37:16.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01" in namespace "configmap-4404" to be "Succeeded or Failed"
Mar 25 23:37:16.501: INFO: Pod "pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.240029ms
Mar 25 23:37:18.505: INFO: Pod "pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014431227s
Mar 25 23:37:20.509: INFO: Pod "pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018567952s
STEP: Saw pod success
Mar 25 23:37:20.509: INFO: Pod "pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01" satisfied condition "Succeeded or Failed"
Mar 25 23:37:20.512: INFO: Trying to get logs from node latest-worker pod pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01 container configmap-volume-test:
STEP: delete the pod
Mar 25 23:37:20.532: INFO: Waiting for pod pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01 to disappear
Mar 25 23:37:20.558: INFO: Pod pod-configmaps-39ca2da3-3576-4824-b9f1-609931d4fc01 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:37:20.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4404" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":86,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
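
This is the same pattern as the secret-volume pods above, but sourced from a ConfigMap: DefaultMode on the volume source sets the mode of every projected file. A fragment-style sketch (names, data, and mode are illustrative; the volume mounts into a pod exactly as in the earlier examples):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().ConfigMaps("configmap-4404").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    mode := int32(0400)
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                DefaultMode:          &mode, // mode applied to each projected key
            },
        },
    }
    _ = vol // attach to a pod spec as shown in the secret-volume sketches
}
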
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:37:20.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 25 23:37:23.742: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:37:23.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6028" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":110,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:37:23.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-db2a2a29-bafd-47cd-a174-a22f4a40ba74
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-db2a2a29-bafd-47cd-a174-a22f4a40ba74
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:38:42.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9837" for this suite.

• [SLOW TEST:78.538 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":122,"failed":0}
SSSSSSSSSSSSSSSSSS
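
The long "waiting to observe update in volume" step is propagation delay: a ConfigMap mounted as a volume is refreshed by the kubelet on its next sync after the API object changes. A sketch of the "Updating configmap" step (names and values are illustrative):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    cms := cs.CoreV1().ConfigMaps("configmap-9837")
    cm, err := cms.Get(ctx, "configmap-test-upd", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    cm.Data = map[string]string{"data-1": "value-2"} // replace the mounted key
    if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    // A pod mounting this ConfigMap sees the new file content after the
    // kubelet's next sync; the test above polls the file until it changes.
}
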
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:38:42.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 25 23:38:42.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad" in namespace "projected-3117" to be "Succeeded or Failed"
Mar 25 23:38:42.402: INFO: Pod "downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.72426ms
Mar 25 23:38:44.451: INFO: Pod "downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052105926s
Mar 25 23:38:46.457: INFO: Pod "downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057995957s
STEP: Saw pod success
Mar 25 23:38:46.457: INFO: Pod "downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad" satisfied condition "Succeeded or Failed"
Mar 25 23:38:46.460: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad container client-container:
STEP: delete the pod
Mar 25 23:38:46.505: INFO: Waiting for pod downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad to disappear
Mar 25 23:38:46.514: INFO: Pod downwardapi-volume-445d83fc-51f3-463d-9f9c-b774911460ad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:38:46.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3117" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":140,"failed":0}
SS
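
The behavior under test: a downward-API volume file backed by resourceFieldRef limits.cpu, on a container that sets no CPU limit, falls back to the node's allocatable CPU. A sketch with illustrative names and image:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                // No resources.limits.cpu set: the projected file falls back
                // to node allocatable CPU, which is what the test asserts.
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("projected-3117").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
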
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:38:46.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 25 23:38:51.133: INFO: Successfully updated pod "pod-update-cf646280-9ccb-4b91-a426-339199589998"
STEP: verifying the updated pod is in kubernetes
Mar 25 23:38:51.153: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 25 23:38:51.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3892" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":142,"failed":0}
SSS
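
"updating the pod" is a read-modify-write against the API server's optimistic concurrency: if another writer bumps resourceVersion between the Get and the Update, the Update fails with a conflict and must be retried. A sketch using client-go's conflict-retry helper (namespace, pod name, and the label being set are illustrative, not taken from the test source):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/retry"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
        pod, err := cs.CoreV1().Pods("pods-3892").Get(ctx, "pod-update", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["time"] = "updated" // the mutation under test
        _, err = cs.CoreV1().Pods("pods-3892").Update(ctx, pod, metav1.UpdateOptions{})
        return err
    })
    if err != nil {
        panic(err)
    }
}
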
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 25 23:38:51.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-854
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-854
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-854
Mar 25 23:38:51.265: INFO: Found 0 stateful pods, waiting for 1
Mar 25 23:39:01.269: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar 25 23:39:01.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 25 23:39:01.531: INFO: stderr: "I0325 23:39:01.403429 105 log.go:172] (0xc000b62160) (0xc00091e000) Create stream\nI0325 23:39:01.403483 105 log.go:172] (0xc000b62160) (0xc00091e000) Stream added, broadcasting: 1\nI0325 23:39:01.407500 105 log.go:172] (0xc000b62160) Reply frame received for 1\nI0325 23:39:01.407578 105 log.go:172] (0xc000b62160) (0xc000b1c000) Create stream\nI0325 23:39:01.407597 105 log.go:172] (0xc000b62160) (0xc000b1c000) Stream added, broadcasting: 3\nI0325 23:39:01.408913 105 log.go:172] (0xc000b62160) Reply frame received for 3\nI0325 23:39:01.408960 105 log.go:172] (0xc000b62160) (0xc00091e0a0) Create stream\nI0325 23:39:01.408973 105 log.go:172] (0xc000b62160) (0xc00091e0a0) Stream added, broadcasting: 5\nI0325 23:39:01.410404 105 log.go:172] (0xc000b62160) Reply frame received for 5\nI0325 23:39:01.499437 105 log.go:172] (0xc000b62160) Data frame received for 5\nI0325 23:39:01.499468 105 log.go:172] (0xc00091e0a0) (5) Data frame handling\nI0325 23:39:01.499487 105 log.go:172] (0xc00091e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0325 23:39:01.525080 105 log.go:172] (0xc000b62160) Data frame received for 3\nI0325 23:39:01.525299 105 log.go:172] (0xc000b1c000) (3) Data frame handling\nI0325 23:39:01.525353 105 log.go:172] (0xc000b1c000) (3) Data frame sent\nI0325 23:39:01.525371 105 log.go:172] (0xc000b62160) Data frame received for 3\nI0325 23:39:01.525383 105 log.go:172] (0xc000b1c000) (3) Data frame handling\nI0325 23:39:01.525519 105 log.go:172] (0xc000b62160) Data frame received for 5\nI0325 23:39:01.525552 105 log.go:172] (0xc00091e0a0) (5) Data frame handling\nI0325 23:39:01.527347 105 log.go:172] (0xc000b62160) Data frame received for 1\nI0325 23:39:01.527364 105 log.go:172] (0xc00091e000) (1) Data frame handling\nI0325 23:39:01.527415 105 log.go:172] (0xc00091e000) (1) Data frame sent\nI0325 23:39:01.527435 105 log.go:172] (0xc000b62160) (0xc00091e000) Stream removed, broadcasting: 1\nI0325 23:39:01.527506 105 log.go:172] (0xc000b62160) Go away received\nI0325 23:39:01.527753 105 log.go:172] (0xc000b62160) (0xc00091e000) Stream removed, broadcasting: 1\nI0325 23:39:01.527767 105 log.go:172] (0xc000b62160) (0xc000b1c000) Stream removed, broadcasting: 3\nI0325 23:39:01.527774 105 log.go:172] (0xc000b62160) (0xc00091e0a0) Stream removed, broadcasting: 5\n"
Mar 25 23:39:01.531: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 25 23:39:01.531: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 25 23:39:01.534: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 25 23:39:11.539: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 25 23:39:11.539: INFO: Waiting for statefulset status.replicas updated to 0
Mar 25 23:39:11.568: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:11.568: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:11.569: INFO:
Mar 25 23:39:11.569: INFO: StatefulSet ss has not reached scale 3, at 1
Mar 25 23:39:12.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980645615s
Mar 25 23:39:13.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975841774s
Mar 25 23:39:14.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.971068565s
Mar 25 23:39:15.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965808156s
Mar 25 23:39:16.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.961269762s
Mar 25 23:39:17.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.955736489s
Mar 25 23:39:18.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.951792014s
Mar 25 23:39:19.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946312476s
Mar 25 23:39:20.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 941.171037ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-854
Mar 25 23:39:21.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 25 23:39:21.860: INFO: stderr: "I0325 23:39:21.755458 127 log.go:172] (0xc00073c580) (0xc0006d35e0) Create stream\nI0325 23:39:21.755517 127 log.go:172] (0xc00073c580) (0xc0006d35e0) Stream added, broadcasting: 1\nI0325 23:39:21.758811 127 log.go:172] (0xc00073c580) Reply frame received for 1\nI0325 23:39:21.758881 127 log.go:172] (0xc00073c580) (0xc0009bc000) Create stream\nI0325 23:39:21.758905 127 log.go:172] (0xc00073c580) (0xc0009bc000) Stream added, broadcasting: 3\nI0325 23:39:21.760013 127 log.go:172] (0xc00073c580) Reply frame received for 3\nI0325 23:39:21.760043 127 log.go:172] (0xc00073c580) (0xc0009bc0a0) Create stream\nI0325 23:39:21.760051 127 log.go:172] (0xc00073c580) (0xc0009bc0a0) Stream added, broadcasting: 5\nI0325 23:39:21.760833 127 log.go:172] (0xc00073c580) Reply frame received for 5\nI0325 23:39:21.854479 127 log.go:172] (0xc00073c580) Data frame received for 5\nI0325 23:39:21.854533 127 log.go:172] (0xc0009bc0a0) (5) Data frame handling\nI0325 23:39:21.854586 127 log.go:172] (0xc0009bc0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0325 23:39:21.854623 127 log.go:172] (0xc00073c580) Data frame received for 3\nI0325 23:39:21.854667 127 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0325 23:39:21.854692 127 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0325 23:39:21.854735 127 log.go:172] (0xc00073c580) Data frame received for 3\nI0325 23:39:21.854764 127 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0325 23:39:21.854793 127 log.go:172] (0xc00073c580) Data frame received for 5\nI0325 23:39:21.854807 127 log.go:172] (0xc0009bc0a0) (5) Data frame handling\nI0325 23:39:21.856452 127 log.go:172] (0xc00073c580) Data frame received for 1\nI0325 23:39:21.856480 127 log.go:172] (0xc0006d35e0) (1) Data frame handling\nI0325 23:39:21.856491 127 log.go:172] (0xc0006d35e0) (1) Data frame sent\nI0325 23:39:21.856505 127 log.go:172] (0xc00073c580) (0xc0006d35e0) Stream removed, broadcasting: 1\nI0325 23:39:21.856559 127 log.go:172] (0xc00073c580) Go away received\nI0325 23:39:21.856832 127 log.go:172] (0xc00073c580) (0xc0006d35e0) Stream removed, broadcasting: 1\nI0325 23:39:21.856847 127 log.go:172] (0xc00073c580) (0xc0009bc000) Stream removed, broadcasting: 3\nI0325 23:39:21.856856 127 log.go:172] (0xc00073c580) (0xc0009bc0a0) Stream removed, broadcasting: 5\n"
Mar 25 23:39:21.861: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 25 23:39:21.861: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 25 23:39:21.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 25 23:39:22.042: INFO: stderr: "I0325 23:39:21.975654 150 log.go:172] (0xc00081aa50) (0xc00083a140) Create stream\nI0325 23:39:21.975723 150 log.go:172] (0xc00081aa50) (0xc00083a140) Stream added, broadcasting: 1\nI0325 23:39:21.983164 150 log.go:172] (0xc00081aa50) Reply frame received for 1\nI0325 23:39:21.983222 150 log.go:172] (0xc00081aa50) (0xc000766000) Create stream\nI0325 23:39:21.983240 150 log.go:172] (0xc00081aa50) (0xc000766000) Stream added, broadcasting: 3\nI0325 23:39:21.985247 150 log.go:172] (0xc00081aa50) Reply frame received for 3\nI0325 23:39:21.985274 150 log.go:172] (0xc00081aa50) (0xc0006a1220) Create stream\nI0325 23:39:21.985287 150 log.go:172] (0xc00081aa50) (0xc0006a1220) Stream added, broadcasting: 5\nI0325 23:39:21.986050 150 log.go:172] (0xc00081aa50) Reply frame received for 5\nI0325 23:39:22.035202 150 log.go:172] (0xc00081aa50) Data frame received for 3\nI0325 23:39:22.035254 150 log.go:172] (0xc000766000) (3) Data frame handling\nI0325 23:39:22.035280 150 log.go:172] (0xc000766000) (3) Data frame sent\nI0325 23:39:22.035298 150 log.go:172] (0xc00081aa50) Data frame received for 3\nI0325 23:39:22.035317 150 log.go:172] (0xc000766000) (3) Data frame handling\nI0325 23:39:22.035349 150 log.go:172] (0xc00081aa50) Data frame received for 5\nI0325 23:39:22.035377 150 log.go:172] (0xc0006a1220) (5) Data frame handling\nI0325 23:39:22.035398 150 log.go:172] (0xc0006a1220) (5) Data frame sent\nI0325 23:39:22.035420 150 log.go:172] (0xc00081aa50) Data frame received for 5\nI0325 23:39:22.035438 150 log.go:172] (0xc0006a1220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0325 23:39:22.036725 150 log.go:172] (0xc00081aa50) Data frame received for 1\nI0325 23:39:22.036750 150 log.go:172] (0xc00083a140) (1) Data frame handling\nI0325 23:39:22.036767 150 log.go:172] (0xc00083a140) (1) Data frame sent\nI0325 23:39:22.036783 150 log.go:172] (0xc00081aa50) (0xc00083a140) Stream removed, broadcasting: 1\nI0325 23:39:22.036942 150 log.go:172] (0xc00081aa50) Go away received\nI0325 23:39:22.037357 150 log.go:172] (0xc00081aa50) (0xc00083a140) Stream removed, broadcasting: 1\nI0325 23:39:22.037379 150 log.go:172] (0xc00081aa50) (0xc000766000) Stream removed, broadcasting: 3\nI0325 23:39:22.037391 150 log.go:172] (0xc00081aa50) (0xc0006a1220) Stream removed, broadcasting: 5\n"
Mar 25 23:39:22.042: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 25 23:39:22.042: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 25 23:39:22.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 25 23:39:22.235: INFO: stderr: "I0325 23:39:22.162023 173 log.go:172] (0xc000ae40b0) (0xc000ac81e0) Create stream\nI0325 23:39:22.162086 173 log.go:172] (0xc000ae40b0) (0xc000ac81e0) Stream added, broadcasting: 1\nI0325 23:39:22.166932 173 log.go:172] (0xc000ae40b0) Reply frame received for 1\nI0325 23:39:22.166995 173 log.go:172] (0xc000ae40b0) (0xc00051f860) Create stream\nI0325 23:39:22.167019 173 log.go:172] (0xc000ae40b0) (0xc00051f860) Stream added, broadcasting: 3\nI0325 23:39:22.168003 173 log.go:172] (0xc000ae40b0) Reply frame received for 3\nI0325 23:39:22.168040 173 log.go:172] (0xc000ae40b0) (0xc000354d20) Create stream\nI0325 23:39:22.168053 173 log.go:172] (0xc000ae40b0) (0xc000354d20) Stream added, broadcasting: 5\nI0325 23:39:22.168932 173 log.go:172] (0xc000ae40b0) Reply frame received for 5\nI0325 23:39:22.229034 173 log.go:172] (0xc000ae40b0) Data frame received for 3\nI0325 23:39:22.229068 173 log.go:172] (0xc00051f860) (3) Data frame handling\nI0325 23:39:22.229095 173 log.go:172] (0xc00051f860) (3) Data frame sent\nI0325 23:39:22.229270 173 log.go:172] (0xc000ae40b0) Data frame received for 3\nI0325 23:39:22.229299 173 log.go:172] (0xc00051f860) (3) Data frame handling\nI0325 23:39:22.229332 173 log.go:172] (0xc000ae40b0) Data frame received for 5\nI0325 23:39:22.229356 173 log.go:172] (0xc000354d20) (5) Data frame handling\nI0325 23:39:22.229384 173 log.go:172] (0xc000354d20) (5) Data frame sent\nI0325 23:39:22.229406 173 log.go:172] (0xc000ae40b0) Data frame received for 5\nI0325 23:39:22.229425 173 log.go:172] (0xc000354d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0325 23:39:22.230911 173 log.go:172] (0xc000ae40b0) Data frame received for 1\nI0325 23:39:22.230931 173 log.go:172] (0xc000ac81e0) (1) Data frame handling\nI0325 23:39:22.230945 173 log.go:172] (0xc000ac81e0) (1) Data frame sent\nI0325 23:39:22.230961 173 log.go:172] (0xc000ae40b0) (0xc000ac81e0) Stream removed, broadcasting: 1\nI0325 23:39:22.231210 173 log.go:172] (0xc000ae40b0) Go away received\nI0325 23:39:22.231316 173 log.go:172] (0xc000ae40b0) (0xc000ac81e0) Stream removed, broadcasting: 1\nI0325 23:39:22.231335 173 log.go:172] (0xc000ae40b0) (0xc00051f860) Stream removed, broadcasting: 3\nI0325 23:39:22.231347 173 log.go:172] (0xc000ae40b0) (0xc000354d20) Stream removed, broadcasting: 5\n"
Mar 25 23:39:22.235: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 25 23:39:22.235: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 25 23:39:22.240: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Mar 25 23:39:32.244: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 23:39:32.244: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 23:39:32.244: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Mar 25 23:39:32.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 25 23:39:32.499: INFO: stderr: "I0325 23:39:32.410558 195 log.go:172] (0xc000a0cc60) (0xc0008f5540) Create stream\nI0325 23:39:32.410615 195 log.go:172] (0xc000a0cc60) (0xc0008f5540) Stream added, broadcasting: 1\nI0325 23:39:32.413590 195 log.go:172] (0xc000a0cc60) Reply frame received for 1\nI0325 23:39:32.413638 195 log.go:172] (0xc000a0cc60) (0xc000591f40) Create stream\nI0325 23:39:32.413653 195 log.go:172] (0xc000a0cc60) (0xc000591f40) Stream added, broadcasting: 3\nI0325 23:39:32.414649 195 log.go:172] (0xc000a0cc60) Reply frame received for 3\nI0325 23:39:32.414700 195 log.go:172] (0xc000a0cc60) (0xc00030c460) Create stream\nI0325 23:39:32.414728 195 log.go:172] (0xc000a0cc60) (0xc00030c460) Stream added, broadcasting: 5\nI0325 23:39:32.416086 195 log.go:172] (0xc000a0cc60) Reply frame received for 5\nI0325 23:39:32.493018 195 log.go:172] (0xc000a0cc60) Data frame received for 3\nI0325 23:39:32.493072 195 log.go:172] (0xc000591f40) (3) Data frame handling\nI0325 23:39:32.493104 195 log.go:172] (0xc000591f40) (3) Data frame sent\nI0325 23:39:32.493236 195 log.go:172] (0xc000a0cc60) Data frame received for 3\nI0325 23:39:32.493250 195 log.go:172] (0xc000591f40) (3) Data frame handling\nI0325 23:39:32.493363 195 log.go:172] (0xc000a0cc60) Data frame received for 5\nI0325 23:39:32.493445 195 log.go:172] (0xc00030c460) (5) Data frame handling\nI0325 23:39:32.493488 195 log.go:172] (0xc00030c460) (5) Data frame sent\nI0325 23:39:32.493521 195 log.go:172] (0xc000a0cc60) Data frame received for 5\nI0325 23:39:32.493542 195 log.go:172] (0xc00030c460) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0325 23:39:32.494795 195 log.go:172] (0xc000a0cc60) Data frame received for 1\nI0325 23:39:32.494826 195 log.go:172] (0xc0008f5540) (1) Data frame handling\nI0325 23:39:32.494871 195 log.go:172] (0xc0008f5540) (1) Data frame sent\nI0325 23:39:32.494903 195 log.go:172] (0xc000a0cc60) (0xc0008f5540) Stream removed, broadcasting: 1\nI0325 23:39:32.494925 195 log.go:172] (0xc000a0cc60) Go away received\nI0325 23:39:32.495412 195 log.go:172] (0xc000a0cc60) (0xc0008f5540) Stream removed, broadcasting: 1\nI0325 23:39:32.495439 195 log.go:172] (0xc000a0cc60) (0xc000591f40) Stream removed, broadcasting: 3\nI0325 23:39:32.495451 195 log.go:172] (0xc000a0cc60) (0xc00030c460) Stream removed, broadcasting: 5\n"
Mar 25 23:39:32.499: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 25 23:39:32.499: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 25 23:39:32.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 25 23:39:32.751: INFO: stderr: "I0325 23:39:32.641551 219 log.go:172] (0xc000aa9340) (0xc000b42500) Create stream\nI0325 23:39:32.641617 219 log.go:172] (0xc000aa9340) (0xc000b42500) Stream added, broadcasting: 1\nI0325 23:39:32.646347 219 log.go:172] (0xc000aa9340) Reply frame received for 1\nI0325 23:39:32.646385 219 log.go:172] (0xc000aa9340) (0xc0006cf680) Create stream\nI0325 23:39:32.646397 219 log.go:172] (0xc000aa9340) (0xc0006cf680) Stream added, broadcasting: 3\nI0325 23:39:32.647344 219 log.go:172] (0xc000aa9340) Reply frame received for 3\nI0325 23:39:32.647394 219 log.go:172] (0xc000aa9340) (0xc000514aa0) Create stream\nI0325 23:39:32.647414 219 log.go:172] (0xc000aa9340) (0xc000514aa0) Stream added, broadcasting: 5\nI0325 23:39:32.648370 219 log.go:172] (0xc000aa9340) Reply frame received for 5\nI0325 23:39:32.716319 219 log.go:172] (0xc000aa9340) Data frame received for 5\nI0325 23:39:32.716349 219 log.go:172] (0xc000514aa0) (5) Data frame handling\nI0325 23:39:32.716369 219 log.go:172] (0xc000514aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0325 23:39:32.744341 219 log.go:172] (0xc000aa9340) Data frame received for 3\nI0325 23:39:32.744359 219 log.go:172] (0xc0006cf680) (3) Data frame handling\nI0325 23:39:32.744366 219 log.go:172] (0xc0006cf680) (3) Data frame sent\nI0325 23:39:32.744384 219 log.go:172] (0xc000aa9340) Data frame received for 5\nI0325 23:39:32.744423 219 log.go:172] (0xc000514aa0) (5) Data frame handling\nI0325 23:39:32.744870 219 log.go:172] (0xc000aa9340) Data frame received for 3\nI0325 23:39:32.744892 219 log.go:172] (0xc0006cf680) (3) Data frame handling\nI0325 23:39:32.747370 219 log.go:172] (0xc000aa9340) Data frame received for 1\nI0325 23:39:32.747390 219 log.go:172] (0xc000b42500) (1) Data frame handling\nI0325 23:39:32.747405 219 log.go:172] (0xc000b42500) (1) Data frame sent\nI0325 23:39:32.747424 219 log.go:172] (0xc000aa9340) (0xc000b42500) Stream removed, broadcasting: 1\nI0325 23:39:32.747528 219 log.go:172] (0xc000aa9340) Go away received\nI0325 23:39:32.747729 219 log.go:172] (0xc000aa9340) (0xc000b42500) Stream removed, broadcasting: 1\nI0325 23:39:32.747748 219 log.go:172] (0xc000aa9340) (0xc0006cf680) Stream removed, broadcasting: 3\nI0325 23:39:32.747759 219 log.go:172] (0xc000aa9340) (0xc000514aa0) Stream removed, broadcasting: 5\n"
Mar 25 23:39:32.751: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 25 23:39:32.751: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 25 23:39:32.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-854 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 25 23:39:32.990: INFO: stderr: "I0325 23:39:32.892808 239 log.go:172] (0xc0009048f0) (0xc0007d1680) Create stream\nI0325 23:39:32.892878 239 log.go:172] (0xc0009048f0) (0xc0007d1680) Stream added, broadcasting: 1\nI0325 23:39:32.895431 239 log.go:172] (0xc0009048f0) Reply frame received for 1\nI0325 23:39:32.895471 239 log.go:172] (0xc0009048f0) (0xc0007d1720) Create stream\nI0325 23:39:32.895484 239 log.go:172] (0xc0009048f0) (0xc0007d1720) Stream added, broadcasting: 3\nI0325 23:39:32.896343 239 log.go:172] (0xc0009048f0) Reply frame received for 3\nI0325 23:39:32.896389 239 log.go:172] (0xc0009048f0) (0xc000a5c000) Create stream\nI0325 23:39:32.896406 239 log.go:172] (0xc0009048f0) (0xc000a5c000) Stream added, broadcasting: 5\nI0325 23:39:32.897566 239 log.go:172] (0xc0009048f0) Reply frame received for 5\nI0325 23:39:32.957372 239 log.go:172] (0xc0009048f0) Data frame received for 5\nI0325 23:39:32.957401 239 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0325 23:39:32.957421 239 log.go:172] (0xc000a5c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0325 23:39:32.984291 239 log.go:172] (0xc0009048f0) Data frame received for 3\nI0325 23:39:32.984314 239 log.go:172] (0xc0007d1720) (3) Data frame handling\nI0325 23:39:32.984348 239 log.go:172] (0xc0007d1720) (3) Data frame sent\nI0325 23:39:32.984658 239 log.go:172] (0xc0009048f0) Data frame received for 3\nI0325 23:39:32.984671 239 log.go:172] (0xc0007d1720) (3) Data frame handling\nI0325 23:39:32.985100 239 log.go:172] (0xc0009048f0) Data frame received for 5\nI0325 23:39:32.985215 239 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0325 23:39:32.987019 239 log.go:172] (0xc0009048f0) Data frame received for 1\nI0325 23:39:32.987034 239 log.go:172] (0xc0007d1680) (1) Data frame handling\nI0325 23:39:32.987052 239 log.go:172] (0xc0007d1680) (1) Data frame sent\nI0325 23:39:32.987067 239 log.go:172] (0xc0009048f0) (0xc0007d1680) Stream removed, broadcasting: 1\nI0325 23:39:32.987127 239 log.go:172] (0xc0009048f0) Go away received\nI0325 23:39:32.987306 239 log.go:172] (0xc0009048f0) (0xc0007d1680) Stream removed, broadcasting: 1\nI0325 23:39:32.987325 239 log.go:172] (0xc0009048f0) (0xc0007d1720) Stream removed, broadcasting: 3\nI0325 23:39:32.987336 239 log.go:172] (0xc0009048f0) (0xc000a5c000) Stream removed, broadcasting: 5\n"
Mar 25 23:39:32.990: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 25 23:39:32.990: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 25 23:39:32.990: INFO: Waiting for statefulset status.replicas updated to 0
Mar 25 23:39:33.033: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar 25 23:39:43.040: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 25 23:39:43.040: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 25 23:39:43.040: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 25 23:39:43.055: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:43.055: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:43.055: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC }]
Mar 25 23:39:43.055: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC }]
Mar 25 23:39:43.055: INFO:
Mar 25 23:39:43.055: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 25 23:39:44.059: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:44.059: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:44.060: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC }]
Mar 25 23:39:44.060: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC }]
Mar 25 23:39:44.060: INFO:
Mar 25 23:39:44.060: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 25 23:39:45.064: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:45.065: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:45.065: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC }]
Mar 25 23:39:45.065: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:11 +0000 UTC }]
Mar 25 23:39:45.065: INFO:
Mar 25 23:39:45.065: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 25 23:39:46.069: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:46.069: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:46.069: INFO:
Mar 25 23:39:46.069: INFO: StatefulSet ss has not reached scale 0, at 1
Mar 25 23:39:47.073: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:47.073: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:47.073: INFO:
Mar 25 23:39:47.073: INFO: StatefulSet ss has not reached scale 0, at 1
Mar 25 23:39:48.078: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:48.078: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:48.078: INFO:
Mar 25 23:39:48.078: INFO: StatefulSet ss has not reached scale 0, at 1
Mar 25 23:39:49.082: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:49.082: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:49.082: INFO:
Mar 25 23:39:49.082: INFO: StatefulSet ss has not reached scale 0, at 1
Mar 25 23:39:50.087: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:50.087: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:50.087: INFO:
Mar 25 23:39:50.087: INFO: StatefulSet ss has not reached scale 0, at 1
Mar 25 23:39:51.095: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 23:39:51.095: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }]
Mar 25 23:39:51.095: INFO:
23:39:51.095: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 25 23:39:52.100: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 23:39:52.100: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:39:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 23:38:51 +0000 UTC }] Mar 25 23:39:52.100: INFO: Mar 25 23:39:52.100: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-854 Mar 25 23:39:53.104: INFO: Scaling statefulset ss to 0 Mar 25 23:39:53.114: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 25 23:39:53.116: INFO: Deleting all statefulset in ns statefulset-854 Mar 25 23:39:53.119: INFO: Scaling statefulset ss to 0 Mar 25 23:39:53.128: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 23:39:53.130: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:39:53.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-854" for this suite. • [SLOW TEST:61.993 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":11,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:39:53.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 23:39:54.112: INFO: deployment "sample-webhook-deployment" doesn't have the required
revision set Mar 25 23:39:56.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776394, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776394, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776394, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776394, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 23:39:59.159: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:39:59.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3714" for this suite. STEP: Destroying namespace "webhook-3714-markers" for this suite. 
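The listing test above exercises the admissionregistration.k8s.io API directly: it creates several MutatingWebhookConfigurations, lists them, confirms a configMap gets mutated, then deletes the whole collection and confirms the next configMap comes back unmutated. A minimal client-go sketch of that list-then-delete-collection pattern (the kubeconfig path and the "e2e-list-test=true" label selector are illustrative assumptions, not the test's actual values):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig file (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List the mutating webhook configurations carrying a marker label
	// ("e2e-list-test=true" is a hypothetical label, not the test's).
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test=true"}
	list, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println("found mutating webhook configuration:", item.Name)
	}

	// Delete everything matching the selector in one DeleteCollection call.
	if err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}

DeleteCollection removes every configuration matching the selector in a single call, which is why the second configMap in the test is expected to stay unmutated.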
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.528 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":12,"skipped":165,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:39:59.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 23:40:00.440: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 23:40:02.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776400, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776400, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776400, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776400, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 23:40:05.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: 
Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:40:05.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7609" for this suite. STEP: Destroying namespace "webhook-7609-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.065 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":13,"skipped":175,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:40:05.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:40:05.832: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 25 23:40:05.857: INFO: Number of nodes with available pods: 0 Mar 25 23:40:05.857: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
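The step above relabels a node to blue so that the DaemonSet's node selector starts matching it, and the polling lines that follow wait for the daemon pod to become available on exactly that node; relabeling to green later unschedules it again. A rough sketch of the spec shape being exercised, assuming the k8s.io/api types (the color label mirrors the step's wording; the pod label and image are placeholders, not what the test deploys):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical pod label
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labeled color=blue get a daemon pod;
					// relabeling a node to green unschedules it again.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine", // placeholder image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Because the DaemonSet controller schedules one pod per matching node, flipping a node label is enough to schedule or unschedule the daemon pod without touching the DaemonSet itself, which is exactly what the polling output below tracks.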
Mar 25 23:40:06.154: INFO: Number of nodes with available pods: 0 Mar 25 23:40:06.154: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:07.159: INFO: Number of nodes with available pods: 0 Mar 25 23:40:07.159: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:08.158: INFO: Number of nodes with available pods: 0 Mar 25 23:40:08.158: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:09.158: INFO: Number of nodes with available pods: 1 Mar 25 23:40:09.158: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 25 23:40:09.189: INFO: Number of nodes with available pods: 1 Mar 25 23:40:09.189: INFO: Number of running nodes: 0, number of available pods: 1 Mar 25 23:40:10.193: INFO: Number of nodes with available pods: 0 Mar 25 23:40:10.193: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 25 23:40:10.205: INFO: Number of nodes with available pods: 0 Mar 25 23:40:10.205: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:11.267: INFO: Number of nodes with available pods: 0 Mar 25 23:40:11.267: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:12.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:12.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:13.210: INFO: Number of nodes with available pods: 0 Mar 25 23:40:13.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:14.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:14.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:15.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:15.209: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:16.210: INFO: Number of nodes with available pods: 0 Mar 25 23:40:16.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:17.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:17.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:18.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:18.209: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:19.243: INFO: Number of nodes with available pods: 0 Mar 25 23:40:19.243: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:20.210: INFO: Number of nodes with available pods: 0 Mar 25 23:40:20.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:21.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:21.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:22.210: INFO: Number of nodes with available pods: 0 Mar 25 23:40:22.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:23.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:23.209: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:24.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:24.210: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:25.209: INFO: Number of nodes with available pods: 0 Mar 25 23:40:25.209: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 23:40:26.210: INFO: Number of nodes with available pods: 1 Mar 25 23:40:26.210: INFO: Number of 
running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3632, will wait for the garbage collector to delete the pods Mar 25 23:40:26.275: INFO: Deleting DaemonSet.extensions daemon-set took: 6.62297ms Mar 25 23:40:26.576: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.261112ms Mar 25 23:40:29.779: INFO: Number of nodes with available pods: 0 Mar 25 23:40:29.779: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 23:40:29.786: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3632/daemonsets","resourceVersion":"2798840"},"items":null} Mar 25 23:40:29.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3632/pods","resourceVersion":"2798840"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:40:29.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3632" for this suite. • [SLOW TEST:24.082 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":14,"skipped":177,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:40:29.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-e30acc87-afda-4061-8fc9-976e01e7b308 in namespace container-probe-8943 Mar 25 23:40:33.912: INFO: Started pod liveness-e30acc87-afda-4061-8fc9-976e01e7b308 in namespace container-probe-8943 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 23:40:33.916: INFO: Initial restart count of pod liveness-e30acc87-afda-4061-8fc9-976e01e7b308 is 0 Mar 25 23:40:51.959: INFO: Restart count of pod container-probe-8943/liveness-e30acc87-afda-4061-8fc9-976e01e7b308 is now 1 (18.042820234s elapsed) Mar 25 23:41:11.999: INFO: Restart count of pod container-probe-8943/liveness-e30acc87-afda-4061-8fc9-976e01e7b308 is now 2 (38.083256902s elapsed) Mar 25 23:41:32.041: INFO: Restart count of pod 
container-probe-8943/liveness-e30acc87-afda-4061-8fc9-976e01e7b308 is now 3 (58.124754164s elapsed) Mar 25 23:41:52.082: INFO: Restart count of pod container-probe-8943/liveness-e30acc87-afda-4061-8fc9-976e01e7b308 is now 4 (1m18.166025567s elapsed) Mar 25 23:43:00.226: INFO: Restart count of pod container-probe-8943/liveness-e30acc87-afda-4061-8fc9-976e01e7b308 is now 5 (2m26.309819813s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:00.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8943" for this suite. • [SLOW TEST:150.457 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:00.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 25 23:43:00.482: INFO: Waiting up to 5m0s for pod "pod-1010081b-169e-4afd-a4ac-55ec02261397" in namespace "emptydir-5306" to be "Succeeded or Failed" Mar 25 23:43:00.519: INFO: Pod "pod-1010081b-169e-4afd-a4ac-55ec02261397": Phase="Pending", Reason="", readiness=false. Elapsed: 37.702668ms Mar 25 23:43:02.551: INFO: Pod "pod-1010081b-169e-4afd-a4ac-55ec02261397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068903525s Mar 25 23:43:04.555: INFO: Pod "pod-1010081b-169e-4afd-a4ac-55ec02261397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073188163s STEP: Saw pod success Mar 25 23:43:04.555: INFO: Pod "pod-1010081b-169e-4afd-a4ac-55ec02261397" satisfied condition "Succeeded or Failed" Mar 25 23:43:04.558: INFO: Trying to get logs from node latest-worker pod pod-1010081b-169e-4afd-a4ac-55ec02261397 container test-container: STEP: delete the pod Mar 25 23:43:04.607: INFO: Waiting for pod pod-1010081b-169e-4afd-a4ac-55ec02261397 to disappear Mar 25 23:43:04.623: INFO: Pod pod-1010081b-169e-4afd-a4ac-55ec02261397 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:04.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5306" for this suite. 
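The emptyDir case above mounts a default-medium emptyDir volume into a pod that runs as a non-root user, creates a file with 0666 permissions, and waits for the pod to reach "Succeeded or Failed" with success. A rough equivalent of that pod shape (the UID, mount path, shell command, and busybox image are illustrative assumptions, not what the test actually uses):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Leaving Medium unset selects the default (node-disk) medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "writer",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}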
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":222,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:04.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:43:04.690: INFO: Creating deployment "test-recreate-deployment" Mar 25 23:43:04.701: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 25 23:43:04.738: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 25 23:43:06.743: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 25 23:43:06.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776584, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776584, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776584, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776584, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:43:08.750: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 25 23:43:08.757: INFO: Updating deployment test-recreate-deployment Mar 25 23:43:08.757: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 25 23:43:09.214: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4133 /apis/apps/v1/namespaces/deployment-4133/deployments/test-recreate-deployment e48bb628-186e-4c69-b8ad-702472d58c8c 2799428 2 2020-03-25 23:43:04 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022287d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-25 23:43:08 +0000 UTC,LastTransitionTime:2020-03-25 23:43:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-25 23:43:08 +0000 UTC,LastTransitionTime:2020-03-25 23:43:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 25 23:43:09.259: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4133 /apis/apps/v1/namespaces/deployment-4133/replicasets/test-recreate-deployment-5f94c574ff 3b2ac16a-197f-4071-887e-e030c0843b91 2799425 1 2020-03-25 23:43:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment e48bb628-186e-4c69-b8ad-702472d58c8c 0xc002228be7 0xc002228be8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002228c48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:43:09.259: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 25 23:43:09.260: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-4133 /apis/apps/v1/namespaces/deployment-4133/replicasets/test-recreate-deployment-846c7dd955 ef9e06c0-8ffb-47d6-a079-f25d7c7b0c2b 2799417 2 2020-03-25 23:43:04 +0000 UTC map[name:sample-pod-3 
pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment e48bb628-186e-4c69-b8ad-702472d58c8c 0xc002228cb7 0xc002228cb8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002228d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:43:09.274: INFO: Pod "test-recreate-deployment-5f94c574ff-4v9g9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-4v9g9 test-recreate-deployment-5f94c574ff- deployment-4133 /api/v1/namespaces/deployment-4133/pods/test-recreate-deployment-5f94c574ff-4v9g9 e4d98369-c518-4e2e-9319-943c72c8086a 2799429 0 2020-03-25 23:43:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 3b2ac16a-197f-4071-887e-e030c0843b91 0xc0022291f7 0xc0022291f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m87ch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m87ch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m87ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-25 23:43:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:09.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4133" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":17,"skipped":235,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:09.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 25 23:43:09.365: INFO: Waiting up to 5m0s for pod "client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d" in namespace "containers-6197" to be "Succeeded or Failed" Mar 25 23:43:09.369: INFO: Pod "client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.574695ms Mar 25 23:43:11.449: INFO: Pod "client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083723283s Mar 25 23:43:13.454: INFO: Pod "client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088202364s STEP: Saw pod success Mar 25 23:43:13.454: INFO: Pod "client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d" satisfied condition "Succeeded or Failed" Mar 25 23:43:13.457: INFO: Trying to get logs from node latest-worker pod client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d container test-container: STEP: delete the pod Mar 25 23:43:13.472: INFO: Waiting for pod client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d to disappear Mar 25 23:43:13.483: INFO: Pod client-containers-5a788dc7-7ccf-42c0-9d7b-512fa31e394d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:13.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6197" for this suite. 
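The "override all" pod above verifies that a pod spec can replace both halves of an image's default invocation: in Kubernetes, a container's command field overrides the image ENTRYPOINT and its args field overrides the image CMD. A minimal sketch with illustrative values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29", // placeholder image
				// Command replaces the image's ENTRYPOINT; Args replaces its CMD.
				Command: []string{"/bin/echo"},
				Args:    []string{"hello", "override"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Setting only args keeps the image's ENTRYPOINT and replaces its CMD; setting only command drops the image's CMD entirely.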
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":242,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:13.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:27.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8365" for this suite. • [SLOW TEST:14.073 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":19,"skipped":248,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:27.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:43:27.624: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 25 23:43:27.633: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 25 23:43:32.677: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 23:43:32.677: INFO: Creating deployment "test-rolling-update-deployment" Mar 25 23:43:32.681: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 25 23:43:32.704: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 25 23:43:34.711: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 25 23:43:34.714: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776612, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776612, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776612, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776612, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:43:36.718: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 25 23:43:36.728: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6640 /apis/apps/v1/namespaces/deployment-6640/deployments/test-rolling-update-deployment 6a11a305-9052-47b1-a72f-aac58f798586 2799718 1 2020-03-25 23:43:32 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ebc838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-25 23:43:32 +0000 UTC,LastTransitionTime:2020-03-25 23:43:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-25 23:43:35 +0000 UTC,LastTransitionTime:2020-03-25 23:43:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 25 23:43:36.731: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-6640 /apis/apps/v1/namespaces/deployment-6640/replicasets/test-rolling-update-deployment-664dd8fc7f 64e9fcb7-6a0b-4dfb-83f9-5ebaa9941a1f 2799707 1 2020-03-25 23:43:32 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 6a11a305-9052-47b1-a72f-aac58f798586 0xc000ebcec7 0xc000ebcec8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ebcf38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:43:36.731: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 25 23:43:36.731: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6640 /apis/apps/v1/namespaces/deployment-6640/replicasets/test-rolling-update-controller 72348c3c-6b8d-4903-b352-558ecfbeac94 2799716 2 2020-03-25 23:43:27 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 6a11a305-9052-47b1-a72f-aac58f798586 0xc000ebcdcf 0xc000ebcde0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000ebce48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:43:36.735: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-zn4bt" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-zn4bt test-rolling-update-deployment-664dd8fc7f- deployment-6640 
/api/v1/namespaces/deployment-6640/pods/test-rolling-update-deployment-664dd8fc7f-zn4bt 15be3e3e-28ec-498a-9827-4f23fcd3f836 2799706 0 2020-03-25 23:43:32 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 64e9fcb7-6a0b-4dfb-83f9-5ebaa9941a1f 0xc000ebd8b7 0xc000ebd8b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42f5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42f5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42f5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-25 23:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:43:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.197,StartTime:2020-03-25 23:43:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-25 23:43:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://0d2a02bf9d562d036ed5a8ccaf00a686baf0654a8218917fc06227051020f07c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:36.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6640" for this suite. • [SLOW TEST:9.179 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":20,"skipped":253,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:36.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-b2814f4d-0a4d-49de-b8ed-65a397335442 STEP: Creating a pod to test consume configMaps Mar 25 23:43:36.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667" in namespace "configmap-5565" to be "Succeeded or Failed" Mar 25 23:43:36.852: INFO: Pod "pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667": Phase="Pending", Reason="", readiness=false. Elapsed: 5.483323ms Mar 25 23:43:38.855: INFO: Pod "pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00858603s Mar 25 23:43:40.872: INFO: Pod "pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025989513s STEP: Saw pod success Mar 25 23:43:40.873: INFO: Pod "pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667" satisfied condition "Succeeded or Failed" Mar 25 23:43:40.877: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667 container configmap-volume-test: STEP: delete the pod Mar 25 23:43:40.930: INFO: Waiting for pod pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667 to disappear Mar 25 23:43:40.933: INFO: Pod pod-configmaps-d3b1da7c-4dbd-417f-adff-9fbd555a2667 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:40.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5565" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:40.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Mar 25 23:43:40.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Mar 25 23:43:41.069: INFO: stderr: "" Mar 25 23:43:41.069: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:43:41.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6421" for this suite. 
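The cluster-info check above shells out to kubectl with explicit --server and --kubeconfig flags and asserts that the master endpoint appears in stdout; the \x1b[...m sequences in the captured output are just kubectl's terminal color codes. A sketch of the same check from plain Go, assuming kubectl is on PATH and reusing the endpoint values shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the test logs, minus the absolute kubectl path.
	cmd := exec.Command("kubectl",
		"--server=https://172.30.12.66:32771",
		"--kubeconfig=/root/.kube/config",
		"cluster-info")
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(err)
	}
	// The e2e check amounts to: does the master line appear in the output?
	if strings.Contains(string(out), "Kubernetes master") {
		fmt.Println("cluster-info reports a running master")
	}
}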
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":22,"skipped":321,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:43:41.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 25 23:43:41.115: INFO: PodSpec: initContainers in spec.initContainers Mar 25 23:44:30.080: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-61e052aa-1524-4295-b509-6079b4a78f5d", GenerateName:"", Namespace:"init-container-3484", SelfLink:"/api/v1/namespaces/init-container-3484/pods/pod-init-61e052aa-1524-4295-b509-6079b4a78f5d", UID:"11d7860a-a8c6-4c6d-8efe-7e85583ac19f", ResourceVersion:"2799970", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720776621, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"115585793"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-482b2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000ed0000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-482b2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-482b2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-482b2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ebc0c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0006a0070), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ebc1e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ebc200)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ebc208), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ebc20c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776621, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776621, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776621, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776621, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.56", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.56"}}, StartTime:(*v1.Time)(0xc002ca8060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002ca80a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006a01c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://963abde138d85a9117b149dc5aebe7baf49b18731f43fe7d3a27541051a496a6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ca80c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ca8080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc000ebc28f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:44:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3484" for this suite. • [SLOW TEST:49.017 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":23,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:44:30.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:44:47.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2409" for this suite. • [SLOW TEST:17.144 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":24,"skipped":380,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:44:47.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7cdda58a-3459-4bd2-82af-4bf2a4a958de STEP: Creating a pod to test consume secrets Mar 25 23:44:47.345: INFO: Waiting up to 5m0s for pod "pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d" in namespace "secrets-8189" to be "Succeeded or Failed" Mar 25 23:44:47.351: INFO: Pod "pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.441094ms Mar 25 23:44:49.354: INFO: Pod "pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008545901s Mar 25 23:44:51.357: INFO: Pod "pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012075931s STEP: Saw pod success Mar 25 23:44:51.357: INFO: Pod "pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d" satisfied condition "Succeeded or Failed" Mar 25 23:44:51.360: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d container secret-volume-test: STEP: delete the pod Mar 25 23:44:51.443: INFO: Waiting for pod pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d to disappear Mar 25 23:44:51.446: INFO: Pod pod-secrets-64da6ad6-ef67-4be7-8f61-a2083bb3390d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:44:51.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8189" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:44:51.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9251 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 23:44:51.526: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 23:44:51.585: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:44:53.655: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:44:55.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:44:57.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:44:59.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:45:01.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:45:03.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:45:05.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:45:07.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:45:09.589: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 23:45:09.596: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 23:45:13.656: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.57:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9251 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:45:13.656: INFO: >>> kubeConfig: /root/.kube/config I0325 23:45:13.691922 7 log.go:172] (0xc002a8c630) (0xc002ba2780) Create stream I0325 23:45:13.691954 7 log.go:172] (0xc002a8c630) (0xc002ba2780) Stream added, broadcasting: 1 I0325 23:45:13.694529 7 log.go:172] (0xc002a8c630) Reply frame received for 1 I0325 23:45:13.694576 7 log.go:172] (0xc002a8c630) (0xc00117c000) Create stream I0325 23:45:13.694593 7 log.go:172] (0xc002a8c630) (0xc00117c000) Stream added, broadcasting: 3 I0325 23:45:13.695623 7 log.go:172] (0xc002a8c630) Reply frame received for 3 I0325 23:45:13.695661 7 log.go:172] (0xc002a8c630) (0xc001fc00a0) Create stream I0325 23:45:13.695670 7 log.go:172] (0xc002a8c630) (0xc001fc00a0) Stream added, broadcasting: 5 I0325 23:45:13.696736 7 log.go:172] (0xc002a8c630) Reply frame received for 5 I0325 23:45:13.786385 7 log.go:172] (0xc002a8c630) Data frame received for 3 I0325 
23:45:13.786431 7 log.go:172] (0xc00117c000) (3) Data frame handling I0325 23:45:13.786460 7 log.go:172] (0xc00117c000) (3) Data frame sent I0325 23:45:13.786602 7 log.go:172] (0xc002a8c630) Data frame received for 3 I0325 23:45:13.786631 7 log.go:172] (0xc00117c000) (3) Data frame handling I0325 23:45:13.786686 7 log.go:172] (0xc002a8c630) Data frame received for 5 I0325 23:45:13.786716 7 log.go:172] (0xc001fc00a0) (5) Data frame handling I0325 23:45:13.788497 7 log.go:172] (0xc002a8c630) Data frame received for 1 I0325 23:45:13.788514 7 log.go:172] (0xc002ba2780) (1) Data frame handling I0325 23:45:13.788524 7 log.go:172] (0xc002ba2780) (1) Data frame sent I0325 23:45:13.788749 7 log.go:172] (0xc002a8c630) (0xc002ba2780) Stream removed, broadcasting: 1 I0325 23:45:13.788787 7 log.go:172] (0xc002a8c630) Go away received I0325 23:45:13.789046 7 log.go:172] (0xc002a8c630) (0xc002ba2780) Stream removed, broadcasting: 1 I0325 23:45:13.789067 7 log.go:172] (0xc002a8c630) (0xc00117c000) Stream removed, broadcasting: 3 I0325 23:45:13.789081 7 log.go:172] (0xc002a8c630) (0xc001fc00a0) Stream removed, broadcasting: 5 Mar 25 23:45:13.789: INFO: Found all expected endpoints: [netserver-0] Mar 25 23:45:13.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.200:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9251 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:45:13.798: INFO: >>> kubeConfig: /root/.kube/config I0325 23:45:13.830080 7 log.go:172] (0xc002a8cc60) (0xc002ba2f00) Create stream I0325 23:45:13.830106 7 log.go:172] (0xc002a8cc60) (0xc002ba2f00) Stream added, broadcasting: 1 I0325 23:45:13.832334 7 log.go:172] (0xc002a8cc60) Reply frame received for 1 I0325 23:45:13.832374 7 log.go:172] (0xc002a8cc60) (0xc00117c320) Create stream I0325 23:45:13.832386 7 log.go:172] (0xc002a8cc60) (0xc00117c320) Stream added, broadcasting: 3 I0325 23:45:13.833371 7 log.go:172] (0xc002a8cc60) Reply frame received for 3 I0325 23:45:13.833409 7 log.go:172] (0xc002a8cc60) (0xc001fc01e0) Create stream I0325 23:45:13.833422 7 log.go:172] (0xc002a8cc60) (0xc001fc01e0) Stream added, broadcasting: 5 I0325 23:45:13.834139 7 log.go:172] (0xc002a8cc60) Reply frame received for 5 I0325 23:45:13.897904 7 log.go:172] (0xc002a8cc60) Data frame received for 3 I0325 23:45:13.897936 7 log.go:172] (0xc00117c320) (3) Data frame handling I0325 23:45:13.897949 7 log.go:172] (0xc00117c320) (3) Data frame sent I0325 23:45:13.897957 7 log.go:172] (0xc002a8cc60) Data frame received for 3 I0325 23:45:13.897967 7 log.go:172] (0xc00117c320) (3) Data frame handling I0325 23:45:13.898137 7 log.go:172] (0xc002a8cc60) Data frame received for 5 I0325 23:45:13.898177 7 log.go:172] (0xc001fc01e0) (5) Data frame handling I0325 23:45:13.899618 7 log.go:172] (0xc002a8cc60) Data frame received for 1 I0325 23:45:13.899650 7 log.go:172] (0xc002ba2f00) (1) Data frame handling I0325 23:45:13.899672 7 log.go:172] (0xc002ba2f00) (1) Data frame sent I0325 23:45:13.899695 7 log.go:172] (0xc002a8cc60) (0xc002ba2f00) Stream removed, broadcasting: 1 I0325 23:45:13.899718 7 log.go:172] (0xc002a8cc60) Go away received I0325 23:45:13.899904 7 log.go:172] (0xc002a8cc60) (0xc002ba2f00) Stream removed, broadcasting: 1 I0325 23:45:13.899936 7 log.go:172] (0xc002a8cc60) (0xc00117c320) Stream removed, broadcasting: 3 I0325 23:45:13.899956 7 log.go:172] (0xc002a8cc60) (0xc001fc01e0) Stream removed, broadcasting: 5 
Mar 25 23:45:13.899: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:45:13.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9251" for this suite. • [SLOW TEST:22.464 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":413,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:45:13.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 25 23:45:13.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a" in namespace "projected-2380" to be "Succeeded or Failed" Mar 25 23:45:13.986: INFO: Pod "downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167955ms Mar 25 23:45:16.002: INFO: Pod "downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019695733s Mar 25 23:45:18.006: INFO: Pod "downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023629305s STEP: Saw pod success Mar 25 23:45:18.006: INFO: Pod "downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a" satisfied condition "Succeeded or Failed" Mar 25 23:45:18.009: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a container client-container: STEP: delete the pod Mar 25 23:45:18.042: INFO: Waiting for pod downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a to disappear Mar 25 23:45:18.046: INFO: Pod downwardapi-volume-4cd51ed9-533f-4b69-a1b4-466436dd824a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:45:18.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2380" for this suite. 
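
The spec behind the projected downwardAPI test that finishes below is a projected volume whose single downward API source maps metadata.name into a file, which the client container then reads. A sketch of an equivalent pod, with illustrative names, assuming current k8s.io/api types:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod sketches the test's pod: a projected volume exposing
// the pod's own name as the file "podname" under /etc/podinfo.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.name",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }
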
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":415,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:45:18.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 25 23:45:18.189: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2214 /api/v1/namespaces/watch-2214/configmaps/e2e-watch-test-resource-version 316ccae0-6586-4a03-a22e-907e15246e65 2800230 0 2020-03-25 23:45:18 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:45:18.189: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2214 /api/v1/namespaces/watch-2214/configmaps/e2e-watch-test-resource-version 316ccae0-6586-4a03-a22e-907e15246e65 2800231 0 2020-03-25 23:45:18 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:45:18.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2214" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":28,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:45:18.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-e8e0e2fa-4eb1-4ad0-8e65-6d66f97ec696 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:45:18.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7152" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":29,"skipped":464,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:45:18.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:45:22.529: INFO: Waiting up to 5m0s for pod "client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6" in namespace "pods-1050" to be "Succeeded or Failed" Mar 25 23:45:22.535: INFO: Pod "client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.835898ms Mar 25 23:45:24.539: INFO: Pod "client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010193032s Mar 25 23:45:26.544: INFO: Pod "client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014422422s STEP: Saw pod success Mar 25 23:45:26.544: INFO: Pod "client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6" satisfied condition "Succeeded or Failed" Mar 25 23:45:26.547: INFO: Trying to get logs from node latest-worker2 pod client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6 container env3cont: STEP: delete the pod Mar 25 23:45:26.566: INFO: Waiting for pod client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6 to disappear Mar 25 23:45:26.576: INFO: Pod client-envvars-08df84d8-0679-4cff-8f8c-f5b46f2adea6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:45:26.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1050" for this suite. • [SLOW TEST:8.234 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":473,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:45:26.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:45:26.672: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 25 23:45:26.718: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:26.732: INFO: Number of nodes with available pods: 0 Mar 25 23:45:26.732: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:27.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:27.740: INFO: Number of nodes with available pods: 0 Mar 25 23:45:27.740: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:28.859: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:28.862: INFO: Number of nodes with available pods: 0 Mar 25 23:45:28.862: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:29.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:29.741: INFO: Number of nodes with available pods: 0 Mar 25 23:45:29.741: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:30.738: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:30.741: INFO: Number of nodes with available pods: 2 Mar 25 23:45:30.741: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 25 23:45:30.781: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:30.781: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:30.815: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:31.818: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:31.818: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:31.821: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:32.818: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:32.818: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:32.821: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:33.819: INFO: Wrong image for pod: daemon-set-dwczr. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:33.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:33.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:33.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:34.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:34.820: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:34.820: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:34.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:35.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:35.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:35.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:35.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:36.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:36.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:36.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:36.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:37.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:37.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:37.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:37.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:38.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:38.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:38.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 25 23:45:38.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:39.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:39.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:39.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:39.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:40.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:40.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:40.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:40.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:41.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:41.820: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:41.820: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:41.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:42.819: INFO: Wrong image for pod: daemon-set-dwczr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:42.819: INFO: Pod daemon-set-dwczr is not available Mar 25 23:45:42.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:42.829: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:43.820: INFO: Pod daemon-set-ch8tw is not available Mar 25 23:45:43.820: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:43.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:44.818: INFO: Pod daemon-set-ch8tw is not available Mar 25 23:45:44.818: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 25 23:45:44.821: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:45.819: INFO: Pod daemon-set-ch8tw is not available Mar 25 23:45:45.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:45.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:46.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:46.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:47.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:47.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:48.819: INFO: Wrong image for pod: daemon-set-mtq5q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 25 23:45:48.819: INFO: Pod daemon-set-mtq5q is not available Mar 25 23:45:48.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:49.819: INFO: Pod daemon-set-w7ckd is not available Mar 25 23:45:49.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 25 23:45:49.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:49.830: INFO: Number of nodes with available pods: 1 Mar 25 23:45:49.830: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:50.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:50.887: INFO: Number of nodes with available pods: 1 Mar 25 23:45:50.887: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:51.834: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:51.838: INFO: Number of nodes with available pods: 1 Mar 25 23:45:51.838: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:45:52.834: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:45:52.838: INFO: Number of nodes with available pods: 2 Mar 25 23:45:52.838: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5780, will wait for the garbage collector to delete the pods Mar 25 23:45:52.910: INFO: Deleting DaemonSet.extensions daemon-set took: 5.175378ms Mar 25 23:45:53.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.334742ms Mar 25 23:46:02.814: INFO: Number of nodes with available pods: 0 Mar 25 23:46:02.814: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 23:46:02.817: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5780/daemonsets","resourceVersion":"2800538"},"items":null} Mar 25 23:46:02.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5780/pods","resourceVersion":"2800538"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:46:02.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5780" for this suite. 
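
Shape of the DaemonSet this test drives, as a sketch against the apps/v1 types — the container name is illustrative, while the images are the ones named in the log:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rollingDaemonSet sketches the DaemonSet under test: RollingUpdate
// strategy, initially running the httpd image that the log reports as
// the "wrong image" once the template is switched to agnhost.
func rollingDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}

func main() { _ = rollingDaemonSet() }

Updating Spec.Template.Spec.Containers[0].Image to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 and re-applying is what produces the node-by-node replacement traced above: each old pod is deleted, its replacement becomes available, and only then does the controller move on, which is why "Number of nodes with available pods" briefly drops to 1.
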
• [SLOW TEST:36.252 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":31,"skipped":478,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:46:02.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-2dd52478-e5b7-4346-94b0-997de00dc465 STEP: Creating a pod to test consume configMaps Mar 25 23:46:02.944: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553" in namespace "configmap-5686" to be "Succeeded or Failed" Mar 25 23:46:02.958: INFO: Pod "pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553": Phase="Pending", Reason="", readiness=false. Elapsed: 14.270939ms Mar 25 23:46:04.962: INFO: Pod "pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018395083s Mar 25 23:46:06.967: INFO: Pod "pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022679907s STEP: Saw pod success Mar 25 23:46:06.967: INFO: Pod "pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553" satisfied condition "Succeeded or Failed" Mar 25 23:46:06.970: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553 container configmap-volume-test: STEP: delete the pod Mar 25 23:46:07.000: INFO: Waiting for pod pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553 to disappear Mar 25 23:46:07.011: INFO: Pod pod-configmaps-a3d8202e-b6d0-4870-8a05-95a2d7450553 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:46:07.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5686" for this suite. 
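
The ConfigMap consumption tests in this run all follow the same arc: create a ConfigMap, mount it as a configMap volume (the earlier spec mounts the same source into multiple volumes of one pod), and assert on the file contents. A sketch of the creation step with client-go — assuming a recent client-go with context-taking calls; the namespace and names here are illustrative, not the generated ones from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A ConfigMap with a single key; mounted as a configMap volume it
	// surfaces in the pod as a file named data-1 containing "value-1".
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	created, err := client.CoreV1().ConfigMaps("default").Create(
		context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created configmap", created.Name)
}
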
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:46:07.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:47:07.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3047" for this suite. • [SLOW TEST:60.084 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:47:07.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 25 23:47:07.201: INFO: Waiting up to 5m0s for pod "pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce" in namespace "emptydir-420" to be "Succeeded or Failed" Mar 25 23:47:07.229: INFO: Pod "pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce": Phase="Pending", Reason="", readiness=false. Elapsed: 27.772315ms Mar 25 23:47:09.232: INFO: Pod "pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031180086s Mar 25 23:47:11.236: INFO: Pod "pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034774959s STEP: Saw pod success Mar 25 23:47:11.236: INFO: Pod "pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce" satisfied condition "Succeeded or Failed" Mar 25 23:47:11.238: INFO: Trying to get logs from node latest-worker2 pod pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce container test-container: STEP: delete the pod Mar 25 23:47:11.281: INFO: Waiting for pod pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce to disappear Mar 25 23:47:11.297: INFO: Pod pod-34887b30-2de0-4693-b9ff-b7b9eb4c79ce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:47:11.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-420" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":625,"failed":0} SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:47:11.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8jfxm in namespace proxy-59 I0325 23:47:11.476812 7 runners.go:190] Created replication controller with name: proxy-service-8jfxm, namespace: proxy-59, replica count: 1 I0325 23:47:12.527292 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:47:13.527559 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:47:14.527795 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:47:15.528011 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 23:47:16.528265 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 23:47:17.528483 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 23:47:18.528727 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 23:47:19.529029 7 runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 23:47:20.529314 7 
runners.go:190] proxy-service-8jfxm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 23:47:20.554: INFO: setup took 9.164071356s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts
Mar 25 23:47:20.563 - 23:47:20.687: INFO: (0) - (19) [all 320 attempts: each round proxied to the pod proxy-service-8jfxm-tp8lq at its root path, to its ports 160, 162 and 1080 (plain and with the http: prefix) and 443, 460 and 462 (https:), and to the service proxy-service-8jfxm on portname1 and portname2 (plain and http:) and tlsportname1 and tlsportname2 (https:); every request returned 200 with the expected body (foo, bar, test, tls baz or tls qux) in roughly 2-17ms]
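The 16 endpoint shapes above are all variations of the apiserver's proxy subresource, /api/v1/namespaces/{ns}/{pods|services}/{name[:port]}/proxy/. Below is a minimal client-go sketch of one such request, not the e2e framework's own helper; it assumes a reasonably recent client-go (older releases' DoRaw takes no context argument), and the namespace, pod name and port are the ones from this run:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to GET /api/v1/namespaces/proxy-59/pods/proxy-service-8jfxm-tp8lq:160/proxy/
	// The apiserver forwards the request to port 160 of the pod and relays the response.
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-59").
		Resource("pods").
		Name("proxy-service-8jfxm-tp8lq:160").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // the echo server answers "foo" on port 160
}

The service variants in the log swap Resource("pods") for Resource("services") and use names like http:proxy-service-8jfxm:portname1.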
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 25 23:47:23.741: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3909 /api/v1/namespaces/watch-3909/configmaps/e2e-watch-test-watch-closed 1301ba89-3964-45c2-adf4-6fdc7bd8feaf 2800922 0 2020-03-25 23:47:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:47:23.742: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3909 /api/v1/namespaces/watch-3909/configmaps/e2e-watch-test-watch-closed 1301ba89-3964-45c2-adf4-6fdc7bd8feaf 2800923 0 2020-03-25 23:47:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 25 23:47:23.764: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3909 /api/v1/namespaces/watch-3909/configmaps/e2e-watch-test-watch-closed 1301ba89-3964-45c2-adf4-6fdc7bd8feaf 2800924 0 2020-03-25 23:47:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:47:23.764: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3909 /api/v1/namespaces/watch-3909/configmaps/e2e-watch-test-watch-closed 1301ba89-3964-45c2-adf4-6fdc7bd8feaf 2800925 0 2020-03-25 23:47:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:47:23.764: INFO: Waiting up to 3m0s for all (but 0)
nodes to be ready STEP: Destroying namespace "watch-3909" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":36,"skipped":680,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:47:23.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-3f739ba3-581f-41b8-8537-4e668b4ec14a STEP: Creating a pod to test consume configMaps Mar 25 23:47:23.853: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632" in namespace "projected-1076" to be "Succeeded or Failed" Mar 25 23:47:23.858: INFO: Pod "pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231664ms Mar 25 23:47:25.863: INFO: Pod "pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009349826s Mar 25 23:47:27.866: INFO: Pod "pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012377187s STEP: Saw pod success Mar 25 23:47:27.866: INFO: Pod "pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632" satisfied condition "Succeeded or Failed" Mar 25 23:47:27.868: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632 container projected-configmap-volume-test: STEP: delete the pod Mar 25 23:47:27.902: INFO: Waiting for pod pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632 to disappear Mar 25 23:47:27.912: INFO: Pod pod-projected-configmaps-9eb0498d-52a0-4f1d-88f4-99bfe51c6632 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:47:27.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1076" for this suite. 
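The projected volume exercised above differs from a plain configMap volume only in that the ConfigMap is wrapped as one source of a projected volume, which can combine several sources (configMaps, secrets, downward API, service account tokens) under one mount. A minimal client-go sketch of the pod shape the test builds, assuming a recent client-go; the image, command, mount path and namespace are illustrative stand-ins, and only the ConfigMap name is taken from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								// ConfigMap name taken from the log above.
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-3f739ba3-581f-41b8-8537-4e668b4ec14a",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative; the e2e suite uses its own test image
				Command: []string{"sh", "-c", "ls /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	// Create the pod; the test then waits for it to reach phase Succeeded.
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}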
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":698,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:47:27.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 25 23:47:28.005: INFO: Created pod &Pod{ObjectMeta:{dns-7452 dns-7452 /api/v1/namespaces/dns-7452/pods/dns-7452 30e547a5-a602-40ce-bf0f-f9037fbe81db 2800959 0 2020-03-25 23:47:28 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qt64l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qt64l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qt64l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},Sh
areProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 23:47:28.008: INFO: The status of Pod dns-7452 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:47:30.011: INFO: The status of Pod dns-7452 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:47:32.013: INFO: The status of Pod dns-7452 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 25 23:47:32.013: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7452 PodName:dns-7452 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:47:32.013: INFO: >>> kubeConfig: /root/.kube/config I0325 23:47:32.049403 7 log.go:172] (0xc002a8ca50) (0xc000e46320) Create stream I0325 23:47:32.049443 7 log.go:172] (0xc002a8ca50) (0xc000e46320) Stream added, broadcasting: 1 I0325 23:47:32.051410 7 log.go:172] (0xc002a8ca50) Reply frame received for 1 I0325 23:47:32.051446 7 log.go:172] (0xc002a8ca50) (0xc000e46460) Create stream I0325 23:47:32.051456 7 log.go:172] (0xc002a8ca50) (0xc000e46460) Stream added, broadcasting: 3 I0325 23:47:32.052686 7 log.go:172] (0xc002a8ca50) Reply frame received for 3 I0325 23:47:32.052758 7 log.go:172] (0xc002a8ca50) (0xc000efed20) Create stream I0325 23:47:32.052791 7 log.go:172] (0xc002a8ca50) (0xc000efed20) Stream added, broadcasting: 5 I0325 23:47:32.054179 7 log.go:172] (0xc002a8ca50) Reply frame received for 5 I0325 23:47:32.151941 7 log.go:172] (0xc002a8ca50) Data frame received for 3 I0325 23:47:32.151967 7 log.go:172] (0xc000e46460) (3) Data frame handling I0325 23:47:32.151992 7 log.go:172] (0xc000e46460) (3) Data frame sent I0325 23:47:32.152489 7 log.go:172] (0xc002a8ca50) Data frame received for 3 I0325 23:47:32.152530 7 log.go:172] (0xc000e46460) (3) Data frame handling I0325 23:47:32.152840 7 log.go:172] (0xc002a8ca50) Data frame received for 5 I0325 23:47:32.152871 7 log.go:172] (0xc000efed20) (5) Data frame handling I0325 23:47:32.155147 7 log.go:172] (0xc002a8ca50) Data frame received for 1 I0325 23:47:32.155180 7 log.go:172] (0xc000e46320) (1) Data frame handling I0325 23:47:32.155210 7 log.go:172] (0xc000e46320) (1) Data frame sent I0325 23:47:32.155238 7 log.go:172] (0xc002a8ca50) (0xc000e46320) Stream removed, broadcasting: 1 I0325 23:47:32.155275 7 log.go:172] (0xc002a8ca50) Go away received I0325 23:47:32.155372 7 log.go:172] (0xc002a8ca50) (0xc000e46320) Stream removed, broadcasting: 1 I0325 23:47:32.155401 7 log.go:172] (0xc002a8ca50) (0xc000e46460) Stream removed, broadcasting: 3 I0325 23:47:32.155413 7 log.go:172] (0xc002a8ca50) (0xc000efed20) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 25 23:47:32.155: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7452 PodName:dns-7452 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:47:32.155: INFO: >>> kubeConfig: /root/.kube/config I0325 23:47:32.187749 7 log.go:172] (0xc002c26420) (0xc0010e66e0) Create stream I0325 23:47:32.187782 7 log.go:172] (0xc002c26420) (0xc0010e66e0) Stream added, broadcasting: 1 I0325 23:47:32.189522 7 log.go:172] (0xc002c26420) Reply frame received for 1 I0325 23:47:32.189549 7 log.go:172] (0xc002c26420) (0xc000fb0640) Create stream I0325 23:47:32.189560 7 log.go:172] (0xc002c26420) (0xc000fb0640) Stream added, broadcasting: 3 I0325 23:47:32.190316 7 log.go:172] (0xc002c26420) Reply frame received for 3 I0325 23:47:32.190350 7 log.go:172] (0xc002c26420) (0xc0012c0780) Create stream I0325 23:47:32.190362 7 log.go:172] (0xc002c26420) (0xc0012c0780) Stream added, broadcasting: 5 I0325 23:47:32.191175 7 log.go:172] (0xc002c26420) Reply frame received for 5 I0325 23:47:32.266669 7 log.go:172] (0xc002c26420) Data frame received for 3 I0325 23:47:32.266713 7 log.go:172] (0xc000fb0640) (3) Data frame handling I0325 23:47:32.266740 7 log.go:172] (0xc000fb0640) (3) Data frame sent I0325 23:47:32.267376 7 log.go:172] (0xc002c26420) Data frame received for 5 I0325 23:47:32.267424 7 log.go:172] (0xc0012c0780) (5) Data frame handling I0325 23:47:32.267462 7 log.go:172] (0xc002c26420) Data frame received for 3 I0325 23:47:32.267483 7 log.go:172] (0xc000fb0640) (3) Data frame handling I0325 23:47:32.268833 7 log.go:172] (0xc002c26420) Data frame received for 1 I0325 23:47:32.268868 7 log.go:172] (0xc0010e66e0) (1) Data frame handling I0325 23:47:32.268894 7 log.go:172] (0xc0010e66e0) (1) Data frame sent I0325 23:47:32.268918 7 log.go:172] (0xc002c26420) (0xc0010e66e0) Stream removed, broadcasting: 1 I0325 23:47:32.268941 7 log.go:172] (0xc002c26420) Go away received I0325 23:47:32.269035 7 log.go:172] (0xc002c26420) (0xc0010e66e0) Stream removed, broadcasting: 1 I0325 23:47:32.269052 7 log.go:172] (0xc002c26420) (0xc000fb0640) Stream removed, broadcasting: 3 I0325 23:47:32.269064 7 log.go:172] (0xc002c26420) (0xc0012c0780) Stream removed, broadcasting: 5 Mar 25 23:47:32.269: INFO: Deleting pod dns-7452... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:47:32.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7452" for this suite. 
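The pod dump above shows the two fields this test exercises: DNSPolicy:None, which tells the kubelet to ignore the cluster DNS entirely, and a DNSConfig that supplies the pod's whole resolver configuration. A minimal client-go sketch of the same spec; the nameserver, search path, image and args match the dump, while the pod and namespace names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom-example"},
		Spec: corev1.PodSpec{
			// With DNSNone the kubelet writes the pod's resolv.conf from
			// DNSConfig alone instead of the cluster DNS settings.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Args:  []string{"pause"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The test then execs /agnhost dns-suffix and /agnhost dns-server-list in the container, as logged below, to confirm the search domain and nameserver actually landed in the pod's resolv.conf.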
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":38,"skipped":710,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:47:32.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6224.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 23:47:38.734: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.737: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.740: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.744: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.754: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.757: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.759: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.762: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:38.768: INFO: Lookups using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local] Mar 25 23:47:43.774: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource 
(get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.777: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.781: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.784: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.794: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.798: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.801: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.804: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:43.811: INFO: Lookups using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local] Mar 25 23:47:48.773: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.776: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.780: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.783: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local from 
pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.793: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.796: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.849: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.852: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:48.859: INFO: Lookups using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local] Mar 25 23:47:53.773: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.777: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.781: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.784: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.794: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.797: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods 
dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.799: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.802: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:53.809: INFO: Lookups using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local] Mar 25 23:47:58.773: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.777: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.780: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.784: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.794: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.798: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.801: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.804: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:47:58.830: INFO: Lookups using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local] Mar 25 23:48:03.773: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.777: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.780: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.784: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.794: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.797: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.800: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.803: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local from pod dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf: the server could not find the requested resource (get pods dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf) Mar 25 23:48:03.810: INFO: Lookups using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6224.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6224.svc.cluster.local jessie_udp@dns-test-service-2.dns-6224.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6224.svc.cluster.local] Mar 25 23:48:08.827: INFO: DNS probes using dns-6224/dns-test-ecdb752a-74c4-448c-8d97-c927bb2067bf succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:48:09.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6224" for this suite. • [SLOW TEST:37.079 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":39,"skipped":720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:48:09.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1598.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1598.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 191.16.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.16.191_udp@PTR;check="$$(dig +tcp +noall +answer +search 191.16.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.16.191_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1598.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1598.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 191.16.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.16.191_udp@PTR;check="$$(dig +tcp +noall +answer +search 191.16.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.16.191_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 23:48:15.544: INFO: Unable to read wheezy_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:15.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:15.551: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:15.575: INFO: Unable to read jessie_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:15.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:15.580: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:15.605: INFO: Lookups using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 failed for: [wheezy_udp@dns-test-service.dns-1598.svc.cluster.local wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local jessie_udp@dns-test-service.dns-1598.svc.cluster.local jessie_tcp@dns-test-service.dns-1598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1598.svc.cluster.local] Mar 25 23:48:20.609: INFO: Unable to read wheezy_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:20.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:20.639: INFO: Unable to read jessie_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:20.641: INFO: Unable to read jessie_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:20.663: INFO: Lookups using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 failed for: 
[wheezy_udp@dns-test-service.dns-1598.svc.cluster.local wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local jessie_udp@dns-test-service.dns-1598.svc.cluster.local jessie_tcp@dns-test-service.dns-1598.svc.cluster.local] Mar 25 23:48:25.614: INFO: Unable to read wheezy_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:25.616: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:25.644: INFO: Unable to read jessie_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:25.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:25.687: INFO: Lookups using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 failed for: [wheezy_udp@dns-test-service.dns-1598.svc.cluster.local wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local jessie_udp@dns-test-service.dns-1598.svc.cluster.local jessie_tcp@dns-test-service.dns-1598.svc.cluster.local] Mar 25 23:48:30.609: INFO: Unable to read wheezy_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:30.611: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:30.635: INFO: Unable to read jessie_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:30.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:30.657: INFO: Lookups using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 failed for: [wheezy_udp@dns-test-service.dns-1598.svc.cluster.local wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local jessie_udp@dns-test-service.dns-1598.svc.cluster.local jessie_tcp@dns-test-service.dns-1598.svc.cluster.local] Mar 25 23:48:35.610: INFO: Unable to read wheezy_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:35.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods 
dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:35.644: INFO: Unable to read jessie_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:35.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:35.672: INFO: Lookups using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 failed for: [wheezy_udp@dns-test-service.dns-1598.svc.cluster.local wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local jessie_udp@dns-test-service.dns-1598.svc.cluster.local jessie_tcp@dns-test-service.dns-1598.svc.cluster.local] Mar 25 23:48:40.609: INFO: Unable to read wheezy_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:40.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:40.639: INFO: Unable to read jessie_udp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:40.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-1598.svc.cluster.local from pod dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0: the server could not find the requested resource (get pods dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0) Mar 25 23:48:40.665: INFO: Lookups using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 failed for: [wheezy_udp@dns-test-service.dns-1598.svc.cluster.local wheezy_tcp@dns-test-service.dns-1598.svc.cluster.local jessie_udp@dns-test-service.dns-1598.svc.cluster.local jessie_tcp@dns-test-service.dns-1598.svc.cluster.local] Mar 25 23:48:45.668: INFO: DNS probes using dns-1598/dns-test-8bf9f436-4350-4f01-8602-a7ecc97f54c0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:48:46.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1598" for this suite. 
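The probe pods above distill to a handful of dig lookups against the cluster resolver, and the intermediate "could not find the requested resource" errors appear to be the framework polling for result files the probe containers have not written yet. A minimal manual reproduction, assuming a shell in any pod of the dns-1598 namespace with dig available (the names and the 10.96.16.191 ClusterIP are taken from the log above):

  # A record for the service, over UDP and then TCP
  dig +notcp +noall +answer +search dns-test-service.dns-1598.svc.cluster.local A
  dig +tcp +noall +answer +search dns-test-service.dns-1598.svc.cluster.local A
  # SRV record for the named "http" port on the service
  dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1598.svc.cluster.local SRV
  # PTR (reverse) lookup of the service ClusterIP 10.96.16.191
  dig +notcp +noall +answer +search 191.16.96.10.in-addr.arpa. PTR

Each lookup in the test writes an OK marker only when the answer section is non-empty, so the run converges once every record resolves.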
• [SLOW TEST:36.882 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":40,"skipped":743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:48:46.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Mar 25 23:48:46.359: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8844" to be "Succeeded or Failed" Mar 25 23:48:46.386: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.494516ms Mar 25 23:48:48.523: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163271977s Mar 25 23:48:50.526: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166947049s Mar 25 23:48:52.531: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.171915373s STEP: Saw pod success Mar 25 23:48:52.531: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 23:48:52.534: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 25 23:48:52.580: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 23:48:52.592: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:48:52.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8844" for this suite. 
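The hostPath check above amounts to mounting a host directory and inspecting the resulting mode bits from inside the container. A rough stand-alone equivalent, assuming busybox and a host path of /tmp; the pod name, image, and path here are illustrative, not the test's actual manifest:

  kubectl run hostpath-mode-check --rm -i --restart=Never --image=busybox \
    --overrides='{
      "spec": {
        "containers": [{
          "name": "hostpath-mode-check",
          "image": "busybox",
          "command": ["ls", "-ld", "/test-volume"],
          "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}]
        }],
        "volumes": [{"name": "test-volume", "hostPath": {"path": "/tmp", "type": "Directory"}}]
      }
    }'

ls -ld prints the mode of the mount point, which is the value the test asserts on.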
• [SLOW TEST:6.399 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":769,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:48:52.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 25 23:48:57.344: INFO: Successfully updated pod "adopt-release-nwxml" STEP: Checking that the Job readopts the Pod Mar 25 23:48:57.344: INFO: Waiting up to 15m0s for pod "adopt-release-nwxml" in namespace "job-4416" to be "adopted" Mar 25 23:48:57.351: INFO: Pod "adopt-release-nwxml": Phase="Running", Reason="", readiness=true. Elapsed: 7.135974ms Mar 25 23:48:59.355: INFO: Pod "adopt-release-nwxml": Phase="Running", Reason="", readiness=true. Elapsed: 2.011609685s Mar 25 23:48:59.355: INFO: Pod "adopt-release-nwxml" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 25 23:48:59.864: INFO: Successfully updated pod "adopt-release-nwxml" STEP: Checking that the Job releases the Pod Mar 25 23:48:59.864: INFO: Waiting up to 15m0s for pod "adopt-release-nwxml" in namespace "job-4416" to be "released" Mar 25 23:48:59.872: INFO: Pod "adopt-release-nwxml": Phase="Running", Reason="", readiness=true. Elapsed: 8.160765ms Mar 25 23:49:01.876: INFO: Pod "adopt-release-nwxml": Phase="Running", Reason="", readiness=true. Elapsed: 2.01199883s Mar 25 23:49:01.876: INFO: Pod "adopt-release-nwxml" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:49:01.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4416" for this suite. 
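Adoption and release above are driven entirely by ownerReferences and labels: the test strips the pod's controllerRef to orphan it, the Job controller re-adopts it because the labels still match the Job's selector, and removing those labels makes the controller release it again. The observable state can be checked by hand against the log's pod and namespace; the Job's selector key is not shown in the log, so "job" below is a placeholder:

  # Which controller, if any, currently owns the pod
  kubectl get pod adopt-release-nwxml -n job-4416 \
    -o jsonpath='{.metadata.ownerReferences[?(@.controller==true)].name}'
  # Removing the selector label triggers release (placeholder key "job")
  kubectl label pod adopt-release-nwxml -n job-4416 job-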
• [SLOW TEST:9.230 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":42,"skipped":771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:49:01.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 25 23:49:06.704: INFO: Successfully updated pod "annotationupdate5298f836-9915-4fd1-950d-ddedff1410db" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:49:08.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1204" for this suite. 
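The downward API volume re-renders its files when pod metadata changes, which is why the test only needs to update the annotations and wait. A hand-driven version of the same loop, assuming the pod projects its annotations to a file such as /etc/podinfo/annotations; the mount path and the annotation key "builder" are assumptions, since the log does not show the manifest:

  kubectl annotate pod annotationupdate5298f836-9915-4fd1-950d-ddedff1410db \
    -n downward-api-1204 builder=updated --overwrite
  # The kubelet refreshes projected metadata on its sync loop, so the new
  # value appears in the file after a short delay rather than instantly
  kubectl exec -n downward-api-1204 annotationupdate5298f836-9915-4fd1-950d-ddedff1410db \
    -- cat /etc/podinfo/annotations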
• [SLOW TEST:6.844 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":828,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:49:08.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Mar 25 23:49:08.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1334' Mar 25 23:49:11.589: INFO: stderr: "" Mar 25 23:49:11.589: INFO: stdout: "pod/pause created\n" Mar 25 23:49:11.589: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 25 23:49:11.590: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1334" to be "running and ready" Mar 25 23:49:11.599: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.232734ms Mar 25 23:49:13.602: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012635553s Mar 25 23:49:15.606: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.016832527s Mar 25 23:49:15.606: INFO: Pod "pause" satisfied condition "running and ready" Mar 25 23:49:15.607: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 25 23:49:15.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1334' Mar 25 23:49:15.704: INFO: stderr: "" Mar 25 23:49:15.704: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 25 23:49:15.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1334' Mar 25 23:49:15.794: INFO: stderr: "" Mar 25 23:49:15.794: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 25 23:49:15.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1334' Mar 25 23:49:15.900: INFO: stderr: "" Mar 25 23:49:15.900: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 25 23:49:15.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1334' Mar 25 23:49:15.991: INFO: stderr: "" Mar 25 23:49:15.991: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Mar 25 23:49:15.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1334' Mar 25 23:49:16.129: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 23:49:16.129: INFO: stdout: "pod \"pause\" force deleted\n" Mar 25 23:49:16.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1334' Mar 25 23:49:16.222: INFO: stderr: "No resources found in kubectl-1334 namespace.\n" Mar 25 23:49:16.222: INFO: stdout: "" Mar 25 23:49:16.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1334 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 23:49:16.412: INFO: stderr: "" Mar 25 23:49:16.412: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:49:16.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1334" for this suite. 
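Stripped of the test plumbing, the label round-trip above is three commands, with the trailing-dash form doing the deletion (names exactly as in the log):

  # Add or update a label as key=value
  kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-1334
  # -L adds a TESTING-LABEL column so the value, or its absence, is visible
  kubectl get pod pause -L testing-label --namespace=kubectl-1334
  # The trailing dash removes the label
  kubectl label pods pause testing-label- --namespace=kubectl-1334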
• [SLOW TEST:7.690 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":44,"skipped":838,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:49:16.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-5f2f0b63-d90e-4117-8f48-3a46ce7a98a6 STEP: Creating a pod to test consume secrets Mar 25 23:49:16.636: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6" in namespace "projected-6104" to be "Succeeded or Failed" Mar 25 23:49:16.640: INFO: Pod "pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530897ms Mar 25 23:49:18.644: INFO: Pod "pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008387168s Mar 25 23:49:20.648: INFO: Pod "pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012492578s STEP: Saw pod success Mar 25 23:49:20.648: INFO: Pod "pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6" satisfied condition "Succeeded or Failed" Mar 25 23:49:20.651: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6 container projected-secret-volume-test: STEP: delete the pod Mar 25 23:49:20.671: INFO: Waiting for pod pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6 to disappear Mar 25 23:49:20.675: INFO: Pod pod-projected-secrets-0a2b016e-b47a-4125-b702-47817d5607f6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:49:20.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6104" for this suite. 
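What distinguishes this projected-secret case is the items mapping, which renames a secret key to an arbitrary file path inside the volume rather than using the key name itself. A sketch of the relevant wiring with hypothetical names, since the test's generated secret, pod spec, and mapped path are not shown in the log:

  kubectl create secret generic demo-secret -n projected-6104 --from-literal=data-1=value-1
  # In the pod spec, a projected source along the lines of
  #   secret: {name: demo-secret, items: [{key: data-1, path: new-path-data-1}]}
  # surfaces the key at <mountPath>/new-path-data-1 instead of <mountPath>/data-1
  kubectl exec demo-pod -n projected-6104 -- cat /projected-volume/new-path-data-1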
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:49:20.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7774 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 23:49:20.742: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 23:49:20.801: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:49:22.825: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:49:24.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:49:26.806: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:49:28.806: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:49:30.806: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:49:32.806: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:49:34.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 23:49:36.806: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 23:49:36.812: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 23:49:38.816: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 23:49:40.816: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 23:49:44.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.71:8080/dial?request=hostname&protocol=http&host=10.244.2.70&port=8080&tries=1'] Namespace:pod-network-test-7774 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:49:44.837: INFO: >>> kubeConfig: /root/.kube/config I0325 23:49:44.866596 7 log.go:172] (0xc002b73760) (0xc0020d9b80) Create stream I0325 23:49:44.866631 7 log.go:172] (0xc002b73760) (0xc0020d9b80) Stream added, broadcasting: 1 I0325 23:49:44.868379 7 log.go:172] (0xc002b73760) Reply frame received for 1 I0325 23:49:44.868416 7 log.go:172] (0xc002b73760) (0xc000d959a0) Create stream I0325 23:49:44.868428 7 log.go:172] (0xc002b73760) (0xc000d959a0) Stream added, broadcasting: 3 I0325 23:49:44.869689 7 log.go:172] (0xc002b73760) Reply frame received for 3 I0325 23:49:44.869725 7 log.go:172] (0xc002b73760) (0xc000d95a40) Create stream I0325 23:49:44.869735 7 log.go:172] (0xc002b73760) (0xc000d95a40) Stream added, broadcasting: 5 I0325 23:49:44.870517 7 log.go:172] (0xc002b73760) Reply frame received for 5 I0325 23:49:44.945040 7 
log.go:172] (0xc002b73760) Data frame received for 3 I0325 23:49:44.945067 7 log.go:172] (0xc000d959a0) (3) Data frame handling I0325 23:49:44.945086 7 log.go:172] (0xc000d959a0) (3) Data frame sent I0325 23:49:44.945781 7 log.go:172] (0xc002b73760) Data frame received for 5 I0325 23:49:44.945811 7 log.go:172] (0xc000d95a40) (5) Data frame handling I0325 23:49:44.945879 7 log.go:172] (0xc002b73760) Data frame received for 3 I0325 23:49:44.945902 7 log.go:172] (0xc000d959a0) (3) Data frame handling I0325 23:49:44.947821 7 log.go:172] (0xc002b73760) Data frame received for 1 I0325 23:49:44.947836 7 log.go:172] (0xc0020d9b80) (1) Data frame handling I0325 23:49:44.947862 7 log.go:172] (0xc0020d9b80) (1) Data frame sent I0325 23:49:44.947889 7 log.go:172] (0xc002b73760) (0xc0020d9b80) Stream removed, broadcasting: 1 I0325 23:49:44.947957 7 log.go:172] (0xc002b73760) (0xc0020d9b80) Stream removed, broadcasting: 1 I0325 23:49:44.947977 7 log.go:172] (0xc002b73760) (0xc000d959a0) Stream removed, broadcasting: 3 I0325 23:49:44.947993 7 log.go:172] (0xc002b73760) (0xc000d95a40) Stream removed, broadcasting: 5 Mar 25 23:49:44.948: INFO: Waiting for responses: map[] I0325 23:49:44.948313 7 log.go:172] (0xc002b73760) Go away received Mar 25 23:49:44.951: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.71:8080/dial?request=hostname&protocol=http&host=10.244.1.212&port=8080&tries=1'] Namespace:pod-network-test-7774 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:49:44.951: INFO: >>> kubeConfig: /root/.kube/config I0325 23:49:44.979285 7 log.go:172] (0xc002dea420) (0xc0016de000) Create stream I0325 23:49:44.979313 7 log.go:172] (0xc002dea420) (0xc0016de000) Stream added, broadcasting: 1 I0325 23:49:44.981829 7 log.go:172] (0xc002dea420) Reply frame received for 1 I0325 23:49:44.981868 7 log.go:172] (0xc002dea420) (0xc001b49c20) Create stream I0325 23:49:44.981883 7 log.go:172] (0xc002dea420) (0xc001b49c20) Stream added, broadcasting: 3 I0325 23:49:44.982868 7 log.go:172] (0xc002dea420) Reply frame received for 3 I0325 23:49:44.982909 7 log.go:172] (0xc002dea420) (0xc001f4abe0) Create stream I0325 23:49:44.982923 7 log.go:172] (0xc002dea420) (0xc001f4abe0) Stream added, broadcasting: 5 I0325 23:49:44.983869 7 log.go:172] (0xc002dea420) Reply frame received for 5 I0325 23:49:45.047868 7 log.go:172] (0xc002dea420) Data frame received for 3 I0325 23:49:45.047894 7 log.go:172] (0xc001b49c20) (3) Data frame handling I0325 23:49:45.047911 7 log.go:172] (0xc001b49c20) (3) Data frame sent I0325 23:49:45.048493 7 log.go:172] (0xc002dea420) Data frame received for 3 I0325 23:49:45.048523 7 log.go:172] (0xc002dea420) Data frame received for 5 I0325 23:49:45.048546 7 log.go:172] (0xc001f4abe0) (5) Data frame handling I0325 23:49:45.048569 7 log.go:172] (0xc001b49c20) (3) Data frame handling I0325 23:49:45.050176 7 log.go:172] (0xc002dea420) Data frame received for 1 I0325 23:49:45.050198 7 log.go:172] (0xc0016de000) (1) Data frame handling I0325 23:49:45.050211 7 log.go:172] (0xc0016de000) (1) Data frame sent I0325 23:49:45.050304 7 log.go:172] (0xc002dea420) (0xc0016de000) Stream removed, broadcasting: 1 I0325 23:49:45.050330 7 log.go:172] (0xc002dea420) Go away received I0325 23:49:45.050418 7 log.go:172] (0xc002dea420) (0xc0016de000) Stream removed, broadcasting: 1 I0325 23:49:45.050440 7 log.go:172] (0xc002dea420) (0xc001b49c20) Stream removed, broadcasting: 3 I0325 23:49:45.050454 7 log.go:172] 
(0xc002dea420) (0xc001f4abe0) Stream removed, broadcasting: 5 Mar 25 23:49:45.050: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:49:45.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7774" for this suite. • [SLOW TEST:24.375 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":868,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:49:45.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 23:49:45.514: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 23:49:47.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776985, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776985, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776985, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720776985, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 23:49:50.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 
STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:49:50.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6699" for this suite. STEP: Destroying namespace "webhook-6699-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.176 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":47,"skipped":869,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:49:51.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:49:51.477: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5459 I0325 23:49:51.501728 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5459, replica count: 1 I0325 23:49:52.552167 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:49:53.552390 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:49:54.552660 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:49:55.552926 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 23:49:55.679: INFO: Created: latency-svc-74t6f Mar 25 23:49:55.685: INFO: Got endpoints: latency-svc-74t6f [32.343451ms] Mar 25 23:49:55.753: INFO: Created: latency-svc-kf7sf Mar 25 23:49:55.767: INFO: Got endpoints: latency-svc-kf7sf [82.346992ms] Mar 25 23:49:55.783: INFO: Created: latency-svc-jp2cr Mar 25 23:49:55.798: INFO: Got endpoints: latency-svc-jp2cr [112.231493ms] Mar 25 23:49:55.825: INFO: Created: latency-svc-8hm5q Mar 25 23:49:55.856: INFO: Got endpoints: latency-svc-8hm5q [170.822994ms] Mar 25 23:49:55.871: INFO: Created: latency-svc-8dgnf Mar 25 23:49:55.887: INFO: Got endpoints: latency-svc-8dgnf [202.236206ms] Mar 25 23:49:55.907: INFO: Created: 
latency-svc-bf429 Mar 25 23:49:55.916: INFO: Got endpoints: latency-svc-bf429 [231.104135ms] Mar 25 23:49:55.931: INFO: Created: latency-svc-sxg77 Mar 25 23:49:55.940: INFO: Got endpoints: latency-svc-sxg77 [254.597273ms] Mar 25 23:49:56.006: INFO: Created: latency-svc-8z729 Mar 25 23:49:56.009: INFO: Got endpoints: latency-svc-8z729 [323.457532ms] Mar 25 23:49:56.053: INFO: Created: latency-svc-r7bdw Mar 25 23:49:56.066: INFO: Got endpoints: latency-svc-r7bdw [380.740744ms] Mar 25 23:49:56.083: INFO: Created: latency-svc-4ndp2 Mar 25 23:49:56.104: INFO: Got endpoints: latency-svc-4ndp2 [418.72248ms] Mar 25 23:49:56.149: INFO: Created: latency-svc-9hngb Mar 25 23:49:56.156: INFO: Got endpoints: latency-svc-9hngb [470.474936ms] Mar 25 23:49:56.183: INFO: Created: latency-svc-7rwqr Mar 25 23:49:56.192: INFO: Got endpoints: latency-svc-7rwqr [506.606181ms] Mar 25 23:49:56.221: INFO: Created: latency-svc-2ffr4 Mar 25 23:49:56.241: INFO: Got endpoints: latency-svc-2ffr4 [556.242896ms] Mar 25 23:49:56.287: INFO: Created: latency-svc-7jljg Mar 25 23:49:56.311: INFO: Created: latency-svc-wvv9j Mar 25 23:49:56.311: INFO: Got endpoints: latency-svc-7jljg [625.717267ms] Mar 25 23:49:56.325: INFO: Got endpoints: latency-svc-wvv9j [639.278249ms] Mar 25 23:49:56.351: INFO: Created: latency-svc-jwb2x Mar 25 23:49:56.419: INFO: Got endpoints: latency-svc-jwb2x [733.181725ms] Mar 25 23:49:56.447: INFO: Created: latency-svc-zfsfh Mar 25 23:49:56.471: INFO: Got endpoints: latency-svc-zfsfh [703.383782ms] Mar 25 23:49:56.491: INFO: Created: latency-svc-8vf4q Mar 25 23:49:56.504: INFO: Got endpoints: latency-svc-8vf4q [706.474453ms] Mar 25 23:49:56.575: INFO: Created: latency-svc-mfh5g Mar 25 23:49:56.603: INFO: Got endpoints: latency-svc-mfh5g [746.646386ms] Mar 25 23:49:56.604: INFO: Created: latency-svc-x98vd Mar 25 23:49:56.617: INFO: Got endpoints: latency-svc-x98vd [729.786225ms] Mar 25 23:49:56.651: INFO: Created: latency-svc-hppt4 Mar 25 23:49:56.665: INFO: Got endpoints: latency-svc-hppt4 [749.118223ms] Mar 25 23:49:56.724: INFO: Created: latency-svc-b8485 Mar 25 23:49:56.749: INFO: Got endpoints: latency-svc-b8485 [808.974681ms] Mar 25 23:49:56.795: INFO: Created: latency-svc-hp2lf Mar 25 23:49:56.809: INFO: Got endpoints: latency-svc-hp2lf [800.283273ms] Mar 25 23:49:56.858: INFO: Created: latency-svc-7cqtx Mar 25 23:49:56.893: INFO: Got endpoints: latency-svc-7cqtx [826.889933ms] Mar 25 23:49:56.929: INFO: Created: latency-svc-8dgv9 Mar 25 23:49:56.941: INFO: Got endpoints: latency-svc-8dgv9 [836.684829ms] Mar 25 23:49:56.982: INFO: Created: latency-svc-tkn6q Mar 25 23:49:56.989: INFO: Got endpoints: latency-svc-tkn6q [833.327601ms] Mar 25 23:49:57.017: INFO: Created: latency-svc-k6rnb Mar 25 23:49:57.031: INFO: Got endpoints: latency-svc-k6rnb [839.397551ms] Mar 25 23:49:57.058: INFO: Created: latency-svc-tcxh9 Mar 25 23:49:57.073: INFO: Got endpoints: latency-svc-tcxh9 [832.034058ms] Mar 25 23:49:57.127: INFO: Created: latency-svc-r4bjm Mar 25 23:49:57.139: INFO: Got endpoints: latency-svc-r4bjm [827.979358ms] Mar 25 23:49:57.163: INFO: Created: latency-svc-8x2lt Mar 25 23:49:57.187: INFO: Got endpoints: latency-svc-8x2lt [861.988382ms] Mar 25 23:49:57.245: INFO: Created: latency-svc-rldm9 Mar 25 23:49:57.253: INFO: Got endpoints: latency-svc-rldm9 [834.305333ms] Mar 25 23:49:57.274: INFO: Created: latency-svc-l7rrs Mar 25 23:49:57.289: INFO: Got endpoints: latency-svc-l7rrs [818.091847ms] Mar 25 23:49:57.311: INFO: Created: latency-svc-vcdjm Mar 25 23:49:57.328: INFO: Got endpoints: 
latency-svc-vcdjm [823.493543ms] Mar 25 23:49:57.389: INFO: Created: latency-svc-zxhpl Mar 25 23:49:57.396: INFO: Got endpoints: latency-svc-zxhpl [793.116652ms] Mar 25 23:49:57.433: INFO: Created: latency-svc-srmkw Mar 25 23:49:57.460: INFO: Got endpoints: latency-svc-srmkw [842.999786ms] Mar 25 23:49:57.521: INFO: Created: latency-svc-5ft8p Mar 25 23:49:57.528: INFO: Got endpoints: latency-svc-5ft8p [862.678323ms] Mar 25 23:49:57.546: INFO: Created: latency-svc-7vtw6 Mar 25 23:49:57.558: INFO: Got endpoints: latency-svc-7vtw6 [808.722241ms] Mar 25 23:49:57.583: INFO: Created: latency-svc-wsnr9 Mar 25 23:49:57.594: INFO: Got endpoints: latency-svc-wsnr9 [784.420626ms] Mar 25 23:49:57.619: INFO: Created: latency-svc-xl7kn Mar 25 23:49:57.647: INFO: Got endpoints: latency-svc-xl7kn [753.348241ms] Mar 25 23:49:57.676: INFO: Created: latency-svc-mqd7w Mar 25 23:49:57.690: INFO: Got endpoints: latency-svc-mqd7w [748.736893ms] Mar 25 23:49:57.737: INFO: Created: latency-svc-mnh2r Mar 25 23:49:57.766: INFO: Got endpoints: latency-svc-mnh2r [776.893498ms] Mar 25 23:49:57.792: INFO: Created: latency-svc-7t64j Mar 25 23:49:57.804: INFO: Got endpoints: latency-svc-7t64j [772.912409ms] Mar 25 23:49:57.829: INFO: Created: latency-svc-c5qdn Mar 25 23:49:57.840: INFO: Got endpoints: latency-svc-c5qdn [766.668662ms] Mar 25 23:49:57.898: INFO: Created: latency-svc-hj8g4 Mar 25 23:49:57.912: INFO: Got endpoints: latency-svc-hj8g4 [773.036496ms] Mar 25 23:49:57.935: INFO: Created: latency-svc-v5k4s Mar 25 23:49:57.948: INFO: Got endpoints: latency-svc-v5k4s [761.097493ms] Mar 25 23:49:57.982: INFO: Created: latency-svc-xcds5 Mar 25 23:49:57.996: INFO: Got endpoints: latency-svc-xcds5 [743.202977ms] Mar 25 23:49:58.062: INFO: Created: latency-svc-ppgkh Mar 25 23:49:58.068: INFO: Got endpoints: latency-svc-ppgkh [778.688645ms] Mar 25 23:49:58.086: INFO: Created: latency-svc-sbckq Mar 25 23:49:58.103: INFO: Got endpoints: latency-svc-sbckq [775.289042ms] Mar 25 23:49:58.127: INFO: Created: latency-svc-wn66d Mar 25 23:49:58.203: INFO: Got endpoints: latency-svc-wn66d [807.162661ms] Mar 25 23:49:58.237: INFO: Created: latency-svc-9vsjk Mar 25 23:49:58.253: INFO: Got endpoints: latency-svc-9vsjk [792.916689ms] Mar 25 23:49:58.278: INFO: Created: latency-svc-th4bg Mar 25 23:49:58.295: INFO: Got endpoints: latency-svc-th4bg [766.771425ms] Mar 25 23:49:58.353: INFO: Created: latency-svc-z5qt4 Mar 25 23:49:58.396: INFO: Got endpoints: latency-svc-z5qt4 [838.542833ms] Mar 25 23:49:58.432: INFO: Created: latency-svc-drd7q Mar 25 23:49:58.445: INFO: Got endpoints: latency-svc-drd7q [851.462908ms] Mar 25 23:49:58.517: INFO: Created: latency-svc-z6tsg Mar 25 23:49:58.543: INFO: Got endpoints: latency-svc-z6tsg [896.306246ms] Mar 25 23:49:58.543: INFO: Created: latency-svc-6bsxb Mar 25 23:49:58.553: INFO: Got endpoints: latency-svc-6bsxb [863.175257ms] Mar 25 23:49:58.576: INFO: Created: latency-svc-fl2w8 Mar 25 23:49:58.601: INFO: Got endpoints: latency-svc-fl2w8 [834.271511ms] Mar 25 23:49:58.665: INFO: Created: latency-svc-r48tk Mar 25 23:49:58.691: INFO: Got endpoints: latency-svc-r48tk [886.252365ms] Mar 25 23:49:58.691: INFO: Created: latency-svc-7gxbm Mar 25 23:49:58.711: INFO: Got endpoints: latency-svc-7gxbm [870.440425ms] Mar 25 23:49:58.730: INFO: Created: latency-svc-pmtnw Mar 25 23:49:58.739: INFO: Got endpoints: latency-svc-pmtnw [826.736057ms] Mar 25 23:49:58.756: INFO: Created: latency-svc-4k8mc Mar 25 23:49:58.802: INFO: Got endpoints: latency-svc-4k8mc [854.037415ms] Mar 25 23:49:58.848: INFO: Created: 
latency-svc-zdzfm Mar 25 23:49:58.864: INFO: Got endpoints: latency-svc-zdzfm [867.449082ms] Mar 25 23:49:58.885: INFO: Created: latency-svc-hw2rj Mar 25 23:49:58.900: INFO: Got endpoints: latency-svc-hw2rj [831.779032ms] Mar 25 23:49:58.945: INFO: Created: latency-svc-b6v4c Mar 25 23:49:58.960: INFO: Got endpoints: latency-svc-b6v4c [856.647241ms] Mar 25 23:49:58.978: INFO: Created: latency-svc-vljzm Mar 25 23:49:58.990: INFO: Got endpoints: latency-svc-vljzm [786.331083ms] Mar 25 23:49:59.008: INFO: Created: latency-svc-tjzv6 Mar 25 23:49:59.042: INFO: Got endpoints: latency-svc-tjzv6 [788.319384ms] Mar 25 23:49:59.044: INFO: Created: latency-svc-mk2wr Mar 25 23:49:59.062: INFO: Got endpoints: latency-svc-mk2wr [766.777637ms] Mar 25 23:49:59.083: INFO: Created: latency-svc-j9dc8 Mar 25 23:49:59.098: INFO: Got endpoints: latency-svc-j9dc8 [701.266363ms] Mar 25 23:49:59.122: INFO: Created: latency-svc-88x2p Mar 25 23:49:59.129: INFO: Got endpoints: latency-svc-88x2p [683.901266ms] Mar 25 23:49:59.167: INFO: Created: latency-svc-5vwz6 Mar 25 23:49:59.170: INFO: Got endpoints: latency-svc-5vwz6 [626.993036ms] Mar 25 23:49:59.188: INFO: Created: latency-svc-qmcqg Mar 25 23:49:59.200: INFO: Got endpoints: latency-svc-qmcqg [646.982516ms] Mar 25 23:49:59.218: INFO: Created: latency-svc-w7w8l Mar 25 23:49:59.230: INFO: Got endpoints: latency-svc-w7w8l [629.538034ms] Mar 25 23:49:59.254: INFO: Created: latency-svc-zs8c2 Mar 25 23:49:59.266: INFO: Got endpoints: latency-svc-zs8c2 [575.379994ms] Mar 25 23:49:59.305: INFO: Created: latency-svc-5pqtr Mar 25 23:49:59.308: INFO: Got endpoints: latency-svc-5pqtr [597.090955ms] Mar 25 23:49:59.329: INFO: Created: latency-svc-8nxll Mar 25 23:49:59.339: INFO: Got endpoints: latency-svc-8nxll [599.6684ms] Mar 25 23:49:59.362: INFO: Created: latency-svc-94frv Mar 25 23:49:59.373: INFO: Got endpoints: latency-svc-94frv [571.126877ms] Mar 25 23:49:59.398: INFO: Created: latency-svc-2j6ts Mar 25 23:49:59.431: INFO: Got endpoints: latency-svc-2j6ts [566.764298ms] Mar 25 23:49:59.434: INFO: Created: latency-svc-dgpfq Mar 25 23:49:59.469: INFO: Got endpoints: latency-svc-dgpfq [569.517082ms] Mar 25 23:49:59.491: INFO: Created: latency-svc-lljb6 Mar 25 23:49:59.515: INFO: Got endpoints: latency-svc-lljb6 [555.19307ms] Mar 25 23:49:59.569: INFO: Created: latency-svc-nlrnm Mar 25 23:49:59.577: INFO: Got endpoints: latency-svc-nlrnm [587.409682ms] Mar 25 23:49:59.596: INFO: Created: latency-svc-jcrfv Mar 25 23:49:59.607: INFO: Got endpoints: latency-svc-jcrfv [565.491461ms] Mar 25 23:49:59.620: INFO: Created: latency-svc-wp8bv Mar 25 23:49:59.645: INFO: Got endpoints: latency-svc-wp8bv [582.805973ms] Mar 25 23:49:59.700: INFO: Created: latency-svc-tlvq8 Mar 25 23:49:59.719: INFO: Got endpoints: latency-svc-tlvq8 [621.11018ms] Mar 25 23:49:59.719: INFO: Created: latency-svc-d7zfn Mar 25 23:49:59.740: INFO: Got endpoints: latency-svc-d7zfn [610.587864ms] Mar 25 23:49:59.754: INFO: Created: latency-svc-9ltph Mar 25 23:49:59.763: INFO: Got endpoints: latency-svc-9ltph [593.457306ms] Mar 25 23:49:59.795: INFO: Created: latency-svc-s26vk Mar 25 23:49:59.822: INFO: Got endpoints: latency-svc-s26vk [621.786402ms] Mar 25 23:49:59.830: INFO: Created: latency-svc-zdk86 Mar 25 23:49:59.842: INFO: Got endpoints: latency-svc-zdk86 [611.37812ms] Mar 25 23:49:59.862: INFO: Created: latency-svc-g5ql9 Mar 25 23:49:59.878: INFO: Got endpoints: latency-svc-g5ql9 [611.6052ms] Mar 25 23:49:59.900: INFO: Created: latency-svc-g8wbv Mar 25 23:49:59.912: INFO: Got endpoints: latency-svc-g8wbv 
[604.470236ms] Mar 25 23:49:59.958: INFO: Created: latency-svc-kf7sk Mar 25 23:49:59.966: INFO: Got endpoints: latency-svc-kf7sk [627.576743ms] Mar 25 23:49:59.986: INFO: Created: latency-svc-qchhz Mar 25 23:50:00.002: INFO: Got endpoints: latency-svc-qchhz [628.996433ms] Mar 25 23:50:00.016: INFO: Created: latency-svc-2nlzp Mar 25 23:50:00.026: INFO: Got endpoints: latency-svc-2nlzp [595.494516ms] Mar 25 23:50:00.040: INFO: Created: latency-svc-szhvd Mar 25 23:50:00.051: INFO: Got endpoints: latency-svc-szhvd [581.694194ms] Mar 25 23:50:00.102: INFO: Created: latency-svc-88jsw Mar 25 23:50:00.123: INFO: Got endpoints: latency-svc-88jsw [607.507593ms] Mar 25 23:50:00.139: INFO: Created: latency-svc-xpvkc Mar 25 23:50:00.152: INFO: Got endpoints: latency-svc-xpvkc [574.978872ms] Mar 25 23:50:00.175: INFO: Created: latency-svc-jm5mn Mar 25 23:50:00.189: INFO: Got endpoints: latency-svc-jm5mn [582.122852ms] Mar 25 23:50:00.234: INFO: Created: latency-svc-g8vfz Mar 25 23:50:00.237: INFO: Got endpoints: latency-svc-g8vfz [592.391786ms] Mar 25 23:50:00.257: INFO: Created: latency-svc-7msbq Mar 25 23:50:00.273: INFO: Got endpoints: latency-svc-7msbq [554.272046ms] Mar 25 23:50:00.295: INFO: Created: latency-svc-vcnnf Mar 25 23:50:00.309: INFO: Got endpoints: latency-svc-vcnnf [569.04181ms] Mar 25 23:50:00.324: INFO: Created: latency-svc-c8vnd Mar 25 23:50:00.377: INFO: Got endpoints: latency-svc-c8vnd [613.785424ms] Mar 25 23:50:00.377: INFO: Created: latency-svc-6rrm7 Mar 25 23:50:00.394: INFO: Got endpoints: latency-svc-6rrm7 [572.400319ms] Mar 25 23:50:00.395: INFO: Created: latency-svc-jn8rn Mar 25 23:50:00.411: INFO: Got endpoints: latency-svc-jn8rn [569.156797ms] Mar 25 23:50:00.430: INFO: Created: latency-svc-tmkzw Mar 25 23:50:00.440: INFO: Got endpoints: latency-svc-tmkzw [562.004526ms] Mar 25 23:50:00.473: INFO: Created: latency-svc-w9sks Mar 25 23:50:00.508: INFO: Got endpoints: latency-svc-w9sks [595.968055ms] Mar 25 23:50:00.547: INFO: Created: latency-svc-kdmjn Mar 25 23:50:00.560: INFO: Got endpoints: latency-svc-kdmjn [593.417025ms] Mar 25 23:50:00.577: INFO: Created: latency-svc-rb49c Mar 25 23:50:00.590: INFO: Got endpoints: latency-svc-rb49c [587.184504ms] Mar 25 23:50:00.607: INFO: Created: latency-svc-rhbjh Mar 25 23:50:00.652: INFO: Got endpoints: latency-svc-rhbjh [626.078609ms] Mar 25 23:50:00.670: INFO: Created: latency-svc-kq2b8 Mar 25 23:50:00.679: INFO: Got endpoints: latency-svc-kq2b8 [628.189784ms] Mar 25 23:50:00.694: INFO: Created: latency-svc-5wzxw Mar 25 23:50:00.704: INFO: Got endpoints: latency-svc-5wzxw [581.0274ms] Mar 25 23:50:00.720: INFO: Created: latency-svc-vnkpd Mar 25 23:50:00.744: INFO: Got endpoints: latency-svc-vnkpd [592.214681ms] Mar 25 23:50:00.796: INFO: Created: latency-svc-gm6b9 Mar 25 23:50:00.820: INFO: Got endpoints: latency-svc-gm6b9 [630.739373ms] Mar 25 23:50:00.820: INFO: Created: latency-svc-d7drj Mar 25 23:50:00.850: INFO: Got endpoints: latency-svc-d7drj [612.917161ms] Mar 25 23:50:00.928: INFO: Created: latency-svc-mk8qf Mar 25 23:50:00.949: INFO: Created: latency-svc-flkff Mar 25 23:50:00.949: INFO: Got endpoints: latency-svc-mk8qf [675.673691ms] Mar 25 23:50:00.962: INFO: Got endpoints: latency-svc-flkff [653.478121ms] Mar 25 23:50:01.006: INFO: Created: latency-svc-gmvg4 Mar 25 23:50:01.023: INFO: Got endpoints: latency-svc-gmvg4 [645.29486ms] Mar 25 23:50:01.054: INFO: Created: latency-svc-zhzmz Mar 25 23:50:01.058: INFO: Got endpoints: latency-svc-zhzmz [663.608835ms] Mar 25 23:50:01.078: INFO: Created: latency-svc-whq8v Mar 25 
23:50:01.087: INFO: Got endpoints: latency-svc-whq8v [675.770213ms] Mar 25 23:50:01.104: INFO: Created: latency-svc-9mgq7 Mar 25 23:50:01.117: INFO: Got endpoints: latency-svc-9mgq7 [677.118481ms] Mar 25 23:50:01.135: INFO: Created: latency-svc-khjbp Mar 25 23:50:01.147: INFO: Got endpoints: latency-svc-khjbp [638.343446ms] Mar 25 23:50:01.191: INFO: Created: latency-svc-rhg4d Mar 25 23:50:01.201: INFO: Got endpoints: latency-svc-rhg4d [640.946846ms] Mar 25 23:50:01.222: INFO: Created: latency-svc-2gj94 Mar 25 23:50:01.246: INFO: Got endpoints: latency-svc-2gj94 [656.351219ms] Mar 25 23:50:01.264: INFO: Created: latency-svc-l8gvn Mar 25 23:50:01.272: INFO: Got endpoints: latency-svc-l8gvn [619.968583ms] Mar 25 23:50:01.288: INFO: Created: latency-svc-ntntg Mar 25 23:50:01.341: INFO: Got endpoints: latency-svc-ntntg [661.961563ms] Mar 25 23:50:01.343: INFO: Created: latency-svc-2kjcg Mar 25 23:50:01.374: INFO: Got endpoints: latency-svc-2kjcg [670.577361ms] Mar 25 23:50:01.408: INFO: Created: latency-svc-wc2v6 Mar 25 23:50:01.424: INFO: Got endpoints: latency-svc-wc2v6 [679.183015ms] Mar 25 23:50:01.462: INFO: Created: latency-svc-gd6n4 Mar 25 23:50:01.477: INFO: Got endpoints: latency-svc-gd6n4 [657.279216ms] Mar 25 23:50:01.494: INFO: Created: latency-svc-tknf4 Mar 25 23:50:01.519: INFO: Got endpoints: latency-svc-tknf4 [668.336495ms] Mar 25 23:50:01.543: INFO: Created: latency-svc-8427r Mar 25 23:50:01.555: INFO: Got endpoints: latency-svc-8427r [606.336333ms] Mar 25 23:50:01.588: INFO: Created: latency-svc-pf722 Mar 25 23:50:01.603: INFO: Got endpoints: latency-svc-pf722 [640.957395ms] Mar 25 23:50:01.624: INFO: Created: latency-svc-gmpjt Mar 25 23:50:01.639: INFO: Got endpoints: latency-svc-gmpjt [616.540662ms] Mar 25 23:50:01.672: INFO: Created: latency-svc-hk79f Mar 25 23:50:01.730: INFO: Got endpoints: latency-svc-hk79f [671.931247ms] Mar 25 23:50:01.731: INFO: Created: latency-svc-jhr5k Mar 25 23:50:01.740: INFO: Got endpoints: latency-svc-jhr5k [653.23165ms] Mar 25 23:50:01.758: INFO: Created: latency-svc-k8w8n Mar 25 23:50:01.776: INFO: Got endpoints: latency-svc-k8w8n [658.68074ms] Mar 25 23:50:01.807: INFO: Created: latency-svc-tslv5 Mar 25 23:50:01.868: INFO: Got endpoints: latency-svc-tslv5 [721.147223ms] Mar 25 23:50:01.868: INFO: Created: latency-svc-gmbkp Mar 25 23:50:01.872: INFO: Got endpoints: latency-svc-gmbkp [670.824601ms] Mar 25 23:50:01.888: INFO: Created: latency-svc-gr2qz Mar 25 23:50:01.902: INFO: Got endpoints: latency-svc-gr2qz [655.805869ms] Mar 25 23:50:01.920: INFO: Created: latency-svc-rkqs2 Mar 25 23:50:01.933: INFO: Got endpoints: latency-svc-rkqs2 [660.198445ms] Mar 25 23:50:01.951: INFO: Created: latency-svc-c2s5l Mar 25 23:50:01.963: INFO: Got endpoints: latency-svc-c2s5l [621.513383ms] Mar 25 23:50:02.018: INFO: Created: latency-svc-4l4fh Mar 25 23:50:02.039: INFO: Got endpoints: latency-svc-4l4fh [664.126696ms] Mar 25 23:50:02.039: INFO: Created: latency-svc-7jv5w Mar 25 23:50:02.062: INFO: Got endpoints: latency-svc-7jv5w [638.396267ms] Mar 25 23:50:02.086: INFO: Created: latency-svc-qmgpd Mar 25 23:50:02.101: INFO: Got endpoints: latency-svc-qmgpd [623.016251ms] Mar 25 23:50:02.150: INFO: Created: latency-svc-fj2vc Mar 25 23:50:02.167: INFO: Created: latency-svc-hlb4l Mar 25 23:50:02.167: INFO: Got endpoints: latency-svc-fj2vc [648.200523ms] Mar 25 23:50:02.188: INFO: Got endpoints: latency-svc-hlb4l [632.464741ms] Mar 25 23:50:02.218: INFO: Created: latency-svc-fkqzm Mar 25 23:50:02.237: INFO: Got endpoints: latency-svc-fkqzm [633.824098ms] Mar 
25 23:50:02.287: INFO: Created: latency-svc-fkr2c Mar 25 23:50:02.329: INFO: Got endpoints: latency-svc-fkr2c [689.656998ms] Mar 25 23:50:02.329: INFO: Created: latency-svc-9jkh6 Mar 25 23:50:02.358: INFO: Got endpoints: latency-svc-9jkh6 [628.273817ms] Mar 25 23:50:02.422: INFO: Created: latency-svc-jkcf8 Mar 25 23:50:02.446: INFO: Got endpoints: latency-svc-jkcf8 [706.012251ms] Mar 25 23:50:02.447: INFO: Created: latency-svc-fcvth Mar 25 23:50:02.460: INFO: Got endpoints: latency-svc-fcvth [683.803922ms] Mar 25 23:50:02.501: INFO: Created: latency-svc-cx77r Mar 25 23:50:02.557: INFO: Got endpoints: latency-svc-cx77r [688.904762ms] Mar 25 23:50:02.569: INFO: Created: latency-svc-zrqqv Mar 25 23:50:02.585: INFO: Got endpoints: latency-svc-zrqqv [713.223635ms] Mar 25 23:50:02.605: INFO: Created: latency-svc-tccsn Mar 25 23:50:02.616: INFO: Got endpoints: latency-svc-tccsn [714.141544ms] Mar 25 23:50:02.628: INFO: Created: latency-svc-cqxlc Mar 25 23:50:02.650: INFO: Got endpoints: latency-svc-cqxlc [717.494994ms] Mar 25 23:50:02.693: INFO: Created: latency-svc-prcck Mar 25 23:50:02.706: INFO: Got endpoints: latency-svc-prcck [742.862477ms] Mar 25 23:50:02.722: INFO: Created: latency-svc-swl57 Mar 25 23:50:02.736: INFO: Got endpoints: latency-svc-swl57 [697.084736ms] Mar 25 23:50:02.760: INFO: Created: latency-svc-ls24s Mar 25 23:50:02.832: INFO: Got endpoints: latency-svc-ls24s [769.742142ms] Mar 25 23:50:02.860: INFO: Created: latency-svc-wshd4 Mar 25 23:50:02.874: INFO: Got endpoints: latency-svc-wshd4 [773.206171ms] Mar 25 23:50:02.897: INFO: Created: latency-svc-26npt Mar 25 23:50:02.920: INFO: Got endpoints: latency-svc-26npt [753.332934ms] Mar 25 23:50:02.976: INFO: Created: latency-svc-c6p2b Mar 25 23:50:02.993: INFO: Got endpoints: latency-svc-c6p2b [804.740216ms] Mar 25 23:50:03.012: INFO: Created: latency-svc-mghhg Mar 25 23:50:03.043: INFO: Got endpoints: latency-svc-mghhg [805.649909ms] Mar 25 23:50:03.095: INFO: Created: latency-svc-xg7nk Mar 25 23:50:03.124: INFO: Got endpoints: latency-svc-xg7nk [795.110928ms] Mar 25 23:50:03.149: INFO: Created: latency-svc-6hxjk Mar 25 23:50:03.160: INFO: Got endpoints: latency-svc-6hxjk [801.495194ms] Mar 25 23:50:03.181: INFO: Created: latency-svc-b4w8z Mar 25 23:50:03.233: INFO: Got endpoints: latency-svc-b4w8z [787.104381ms] Mar 25 23:50:03.246: INFO: Created: latency-svc-hcgs5 Mar 25 23:50:03.277: INFO: Got endpoints: latency-svc-hcgs5 [817.374792ms] Mar 25 23:50:03.305: INFO: Created: latency-svc-k8njw Mar 25 23:50:03.317: INFO: Got endpoints: latency-svc-k8njw [759.591894ms] Mar 25 23:50:03.419: INFO: Created: latency-svc-r4vls Mar 25 23:50:03.456: INFO: Got endpoints: latency-svc-r4vls [870.970603ms] Mar 25 23:50:03.457: INFO: Created: latency-svc-qjntw Mar 25 23:50:03.474: INFO: Got endpoints: latency-svc-qjntw [858.262072ms] Mar 25 23:50:03.490: INFO: Created: latency-svc-rt8qp Mar 25 23:50:03.503: INFO: Got endpoints: latency-svc-rt8qp [852.343528ms] Mar 25 23:50:03.545: INFO: Created: latency-svc-67c98 Mar 25 23:50:03.551: INFO: Got endpoints: latency-svc-67c98 [845.71735ms] Mar 25 23:50:03.568: INFO: Created: latency-svc-2bjlr Mar 25 23:50:03.594: INFO: Got endpoints: latency-svc-2bjlr [858.742139ms] Mar 25 23:50:03.606: INFO: Created: latency-svc-6qtjj Mar 25 23:50:03.617: INFO: Got endpoints: latency-svc-6qtjj [784.527453ms] Mar 25 23:50:03.636: INFO: Created: latency-svc-jg4sr Mar 25 23:50:03.670: INFO: Got endpoints: latency-svc-jg4sr [796.373569ms] Mar 25 23:50:03.688: INFO: Created: latency-svc-jk5lw Mar 25 23:50:03.706: 
INFO: Got endpoints: latency-svc-jk5lw [785.633591ms] Mar 25 23:50:03.718: INFO: Created: latency-svc-xd6mt Mar 25 23:50:03.729: INFO: Got endpoints: latency-svc-xd6mt [736.518278ms] Mar 25 23:50:03.744: INFO: Created: latency-svc-l4bb5 Mar 25 23:50:03.753: INFO: Got endpoints: latency-svc-l4bb5 [710.22779ms] Mar 25 23:50:03.814: INFO: Created: latency-svc-lql6s Mar 25 23:50:03.841: INFO: Created: latency-svc-wmffk Mar 25 23:50:03.841: INFO: Got endpoints: latency-svc-lql6s [716.735365ms] Mar 25 23:50:03.864: INFO: Got endpoints: latency-svc-wmffk [704.461894ms] Mar 25 23:50:03.892: INFO: Created: latency-svc-442rx Mar 25 23:50:03.903: INFO: Got endpoints: latency-svc-442rx [669.482399ms] Mar 25 23:50:03.934: INFO: Created: latency-svc-tzgjh Mar 25 23:50:03.940: INFO: Got endpoints: latency-svc-tzgjh [662.757375ms] Mar 25 23:50:03.959: INFO: Created: latency-svc-7hzkk Mar 25 23:50:03.982: INFO: Got endpoints: latency-svc-7hzkk [665.142707ms] Mar 25 23:50:04.002: INFO: Created: latency-svc-zvsfd Mar 25 23:50:04.024: INFO: Got endpoints: latency-svc-zvsfd [567.703112ms] Mar 25 23:50:04.060: INFO: Created: latency-svc-dfcdn Mar 25 23:50:04.066: INFO: Got endpoints: latency-svc-dfcdn [591.409141ms] Mar 25 23:50:04.084: INFO: Created: latency-svc-5z9rd Mar 25 23:50:04.096: INFO: Got endpoints: latency-svc-5z9rd [592.86835ms] Mar 25 23:50:04.114: INFO: Created: latency-svc-k625p Mar 25 23:50:04.126: INFO: Got endpoints: latency-svc-k625p [574.417027ms] Mar 25 23:50:04.144: INFO: Created: latency-svc-hvpdd Mar 25 23:50:04.156: INFO: Got endpoints: latency-svc-hvpdd [561.756204ms] Mar 25 23:50:04.203: INFO: Created: latency-svc-j7gn2 Mar 25 23:50:04.224: INFO: Got endpoints: latency-svc-j7gn2 [607.500651ms] Mar 25 23:50:04.255: INFO: Created: latency-svc-5q2bj Mar 25 23:50:04.268: INFO: Got endpoints: latency-svc-5q2bj [597.88443ms] Mar 25 23:50:04.294: INFO: Created: latency-svc-k8hmq Mar 25 23:50:04.329: INFO: Got endpoints: latency-svc-k8hmq [623.226508ms] Mar 25 23:50:04.336: INFO: Created: latency-svc-flscx Mar 25 23:50:04.352: INFO: Got endpoints: latency-svc-flscx [623.005453ms] Mar 25 23:50:04.368: INFO: Created: latency-svc-7d6rl Mar 25 23:50:04.382: INFO: Got endpoints: latency-svc-7d6rl [628.990273ms] Mar 25 23:50:04.398: INFO: Created: latency-svc-gccb9 Mar 25 23:50:04.413: INFO: Got endpoints: latency-svc-gccb9 [571.66595ms] Mar 25 23:50:04.428: INFO: Created: latency-svc-csngv Mar 25 23:50:04.473: INFO: Got endpoints: latency-svc-csngv [608.437263ms] Mar 25 23:50:04.492: INFO: Created: latency-svc-zd9nn Mar 25 23:50:04.522: INFO: Got endpoints: latency-svc-zd9nn [618.813643ms] Mar 25 23:50:04.605: INFO: Created: latency-svc-h2gdp Mar 25 23:50:04.626: INFO: Created: latency-svc-q2h6c Mar 25 23:50:04.627: INFO: Got endpoints: latency-svc-h2gdp [687.027553ms] Mar 25 23:50:04.651: INFO: Got endpoints: latency-svc-q2h6c [668.670321ms] Mar 25 23:50:04.669: INFO: Created: latency-svc-b7mps Mar 25 23:50:04.677: INFO: Got endpoints: latency-svc-b7mps [653.500379ms] Mar 25 23:50:04.696: INFO: Created: latency-svc-qxp5p Mar 25 23:50:04.760: INFO: Got endpoints: latency-svc-qxp5p [694.315471ms] Mar 25 23:50:04.761: INFO: Created: latency-svc-trpzp Mar 25 23:50:04.782: INFO: Got endpoints: latency-svc-trpzp [686.710005ms] Mar 25 23:50:04.853: INFO: Created: latency-svc-7f772 Mar 25 23:50:04.934: INFO: Got endpoints: latency-svc-7f772 [807.924066ms] Mar 25 23:50:04.935: INFO: Created: latency-svc-4qljw Mar 25 23:50:04.945: INFO: Got endpoints: latency-svc-4qljw [788.82815ms] Mar 25 23:50:04.963: 
INFO: Created: latency-svc-vhkpx Mar 25 23:50:04.984: INFO: Got endpoints: latency-svc-vhkpx [760.005703ms] Mar 25 23:50:05.005: INFO: Created: latency-svc-wmtwr Mar 25 23:50:05.023: INFO: Got endpoints: latency-svc-wmtwr [755.271984ms] Mar 25 23:50:05.072: INFO: Created: latency-svc-z6mwl Mar 25 23:50:05.077: INFO: Got endpoints: latency-svc-z6mwl [747.601273ms] Mar 25 23:50:05.077: INFO: Latencies: [82.346992ms 112.231493ms 170.822994ms 202.236206ms 231.104135ms 254.597273ms 323.457532ms 380.740744ms 418.72248ms 470.474936ms 506.606181ms 554.272046ms 555.19307ms 556.242896ms 561.756204ms 562.004526ms 565.491461ms 566.764298ms 567.703112ms 569.04181ms 569.156797ms 569.517082ms 571.126877ms 571.66595ms 572.400319ms 574.417027ms 574.978872ms 575.379994ms 581.0274ms 581.694194ms 582.122852ms 582.805973ms 587.184504ms 587.409682ms 591.409141ms 592.214681ms 592.391786ms 592.86835ms 593.417025ms 593.457306ms 595.494516ms 595.968055ms 597.090955ms 597.88443ms 599.6684ms 604.470236ms 606.336333ms 607.500651ms 607.507593ms 608.437263ms 610.587864ms 611.37812ms 611.6052ms 612.917161ms 613.785424ms 616.540662ms 618.813643ms 619.968583ms 621.11018ms 621.513383ms 621.786402ms 623.005453ms 623.016251ms 623.226508ms 625.717267ms 626.078609ms 626.993036ms 627.576743ms 628.189784ms 628.273817ms 628.990273ms 628.996433ms 629.538034ms 630.739373ms 632.464741ms 633.824098ms 638.343446ms 638.396267ms 639.278249ms 640.946846ms 640.957395ms 645.29486ms 646.982516ms 648.200523ms 653.23165ms 653.478121ms 653.500379ms 655.805869ms 656.351219ms 657.279216ms 658.68074ms 660.198445ms 661.961563ms 662.757375ms 663.608835ms 664.126696ms 665.142707ms 668.336495ms 668.670321ms 669.482399ms 670.577361ms 670.824601ms 671.931247ms 675.673691ms 675.770213ms 677.118481ms 679.183015ms 683.803922ms 683.901266ms 686.710005ms 687.027553ms 688.904762ms 689.656998ms 694.315471ms 697.084736ms 701.266363ms 703.383782ms 704.461894ms 706.012251ms 706.474453ms 710.22779ms 713.223635ms 714.141544ms 716.735365ms 717.494994ms 721.147223ms 729.786225ms 733.181725ms 736.518278ms 742.862477ms 743.202977ms 746.646386ms 747.601273ms 748.736893ms 749.118223ms 753.332934ms 753.348241ms 755.271984ms 759.591894ms 760.005703ms 761.097493ms 766.668662ms 766.771425ms 766.777637ms 769.742142ms 772.912409ms 773.036496ms 773.206171ms 775.289042ms 776.893498ms 778.688645ms 784.420626ms 784.527453ms 785.633591ms 786.331083ms 787.104381ms 788.319384ms 788.82815ms 792.916689ms 793.116652ms 795.110928ms 796.373569ms 800.283273ms 801.495194ms 804.740216ms 805.649909ms 807.162661ms 807.924066ms 808.722241ms 808.974681ms 817.374792ms 818.091847ms 823.493543ms 826.736057ms 826.889933ms 827.979358ms 831.779032ms 832.034058ms 833.327601ms 834.271511ms 834.305333ms 836.684829ms 838.542833ms 839.397551ms 842.999786ms 845.71735ms 851.462908ms 852.343528ms 854.037415ms 856.647241ms 858.262072ms 858.742139ms 861.988382ms 862.678323ms 863.175257ms 867.449082ms 870.440425ms 870.970603ms 886.252365ms 896.306246ms] Mar 25 23:50:05.077: INFO: 50 %ile: 670.577361ms Mar 25 23:50:05.077: INFO: 90 %ile: 834.305333ms Mar 25 23:50:05.077: INFO: 99 %ile: 886.252365ms Mar 25 23:50:05.077: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:05.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5459" for this suite. 
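For context, the 50/90/99 %ile figures above are taken from the sorted slice of 200 endpoint-availability latencies printed just before them. A minimal Go sketch of that computation follows; the nearest-rank index selection here is an assumption for illustration, not necessarily the exact formula the e2e framework uses.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at the given percentile of a sorted sample,
// using a plain nearest-rank approximation.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few of the endpoint-availability latencies from the run above;
	// the real test collects one sample per created service (200 total).
	samples := []time.Duration{
		604470236 * time.Nanosecond,
		627576743 * time.Nanosecond,
		628996433 * time.Nanosecond,
		595494516 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
	fmt.Printf("Total sample count: %d\n", len(samples))
}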
• [SLOW TEST:13.876 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":48,"skipped":887,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:05.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:09.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5644" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":49,"skipped":899,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:09.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 25 23:50:09.428: INFO: Waiting up to 5m0s for pod "downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6" in namespace "downward-api-8349" to be "Succeeded or Failed" Mar 25 23:50:09.641: INFO: Pod "downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 212.679915ms Mar 25 23:50:11.647: INFO: Pod "downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218909997s Mar 25 23:50:13.658: INFO: Pod "downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.229817234s STEP: Saw pod success Mar 25 23:50:13.658: INFO: Pod "downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6" satisfied condition "Succeeded or Failed" Mar 25 23:50:13.664: INFO: Trying to get logs from node latest-worker2 pod downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6 container dapi-container: STEP: delete the pod Mar 25 23:50:13.701: INFO: Waiting for pod downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6 to disappear Mar 25 23:50:13.706: INFO: Pod downward-api-fce4dc26-0504-4903-bf16-a4bd8942d2f6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:13.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8349" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":906,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:13.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 25 23:50:13.815: INFO: Waiting up to 5m0s for pod "pod-a252c538-7209-4f98-85ed-96f569c4a721" in namespace "emptydir-8013" to be "Succeeded or Failed" Mar 25 23:50:13.831: INFO: Pod "pod-a252c538-7209-4f98-85ed-96f569c4a721": Phase="Pending", Reason="", readiness=false. Elapsed: 15.425204ms Mar 25 23:50:15.899: INFO: Pod "pod-a252c538-7209-4f98-85ed-96f569c4a721": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083417068s Mar 25 23:50:17.901: INFO: Pod "pod-a252c538-7209-4f98-85ed-96f569c4a721": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086099999s STEP: Saw pod success Mar 25 23:50:17.902: INFO: Pod "pod-a252c538-7209-4f98-85ed-96f569c4a721" satisfied condition "Succeeded or Failed" Mar 25 23:50:17.912: INFO: Trying to get logs from node latest-worker2 pod pod-a252c538-7209-4f98-85ed-96f569c4a721 container test-container: STEP: delete the pod Mar 25 23:50:18.014: INFO: Waiting for pod pod-a252c538-7209-4f98-85ed-96f569c4a721 to disappear Mar 25 23:50:18.019: INFO: Pod pod-a252c538-7209-4f98-85ed-96f569c4a721 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:18.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8013" for this suite. 
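The pod created in the step above mounts an emptyDir backed by tmpfs and verifies the mount's type and mode. A minimal sketch of an equivalent pod object using the Kubernetes API types; the image and command are illustrative, not the exact ones the framework uses.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" asks the kubelet for a tmpfs-backed emptyDir.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Print the mode and mount type so the test can assert on them.
				Command:      []string{"sh", "-c", "stat -c '%a' /test-volume && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ") // error ignored in this sketch
	fmt.Println(string(out))
}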
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":920,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:18.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:50:18.189: INFO: Creating ReplicaSet my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a Mar 25 23:50:18.217: INFO: Pod name my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a: Found 0 pods out of 1 Mar 25 23:50:23.226: INFO: Pod name my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a: Found 1 pods out of 1 Mar 25 23:50:23.226: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a" is running Mar 25 23:50:23.243: INFO: Pod "my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a-ljtvg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 23:50:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 23:50:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 23:50:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 23:50:18 +0000 UTC Reason: Message:}]) Mar 25 23:50:23.244: INFO: Trying to dial the pod Mar 25 23:50:28.256: INFO: Controller my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a: Got expected result from replica 1 [my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a-ljtvg]: "my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a-ljtvg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:28.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3975" for this suite. 
• [SLOW TEST:10.206 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":52,"skipped":924,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:28.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 25 23:50:28.332: INFO: Waiting up to 5m0s for pod "client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02" in namespace "containers-1705" to be "Succeeded or Failed" Mar 25 23:50:28.389: INFO: Pod "client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02": Phase="Pending", Reason="", readiness=false. Elapsed: 57.405521ms Mar 25 23:50:30.393: INFO: Pod "client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061043579s Mar 25 23:50:32.397: INFO: Pod "client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064965961s STEP: Saw pod success Mar 25 23:50:32.397: INFO: Pod "client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02" satisfied condition "Succeeded or Failed" Mar 25 23:50:32.400: INFO: Trying to get logs from node latest-worker2 pod client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02 container test-container: STEP: delete the pod Mar 25 23:50:32.450: INFO: Waiting for pod client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02 to disappear Mar 25 23:50:32.478: INFO: Pod client-containers-60303693-6bd8-4e84-ad3c-ff74ca667d02 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:32.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1705" for this suite. 
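The "override arguments (docker cmd)" case above exercises the rule that a container's args field replaces the image's CMD while leaving its ENTRYPOINT intact (setting command as well would replace the ENTRYPOINT too). A minimal sketch of the relevant container spec; the image name is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "example/argprinter", // illustrative image
		// Args replaces the image's CMD (the "docker cmd"); the image's
		// ENTRYPOINT still runs and receives these values.
		Args: []string{"overridden", "arguments"},
	}
	fmt.Printf("args: %v\n", c.Args)
}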
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":927,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:32.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 25 23:50:32.579: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 23:50:32.593: INFO: Waiting for terminating namespaces to be deleted... Mar 25 23:50:32.595: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 25 23:50:32.615: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:50:32.615: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 23:50:32.615: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:50:32.615: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 23:50:32.615: INFO: svc-latency-rc-lxhrn from svc-latency-5459 started at 2020-03-25 23:49:51 +0000 UTC (1 container statuses recorded) Mar 25 23:50:32.615: INFO: Container svc-latency-rc ready: false, restart count 0 Mar 25 23:50:32.615: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 25 23:50:32.621: INFO: my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a-ljtvg from replicaset-3975 started at 2020-03-25 23:50:18 +0000 UTC (1 container statuses recorded) Mar 25 23:50:32.622: INFO: Container my-hostname-basic-74fe342e-fc4c-4373-9909-e526ca58a85a ready: true, restart count 0 Mar 25 23:50:32.622: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:50:32.622: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 23:50:32.622: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:50:32.622: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ffb105e0ad98e6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ffb105e5ff040c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:33.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4886" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":54,"skipped":931,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:33.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 25 23:50:43.825: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:43.825: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:43.861852 7 log.go:172] (0xc0031cb6b0) (0xc001435c20) Create stream I0325 23:50:43.861883 7 log.go:172] (0xc0031cb6b0) (0xc001435c20) Stream added, broadcasting: 1 I0325 23:50:43.868518 7 log.go:172] (0xc0031cb6b0) Reply frame received for 1 I0325 23:50:43.868559 7 log.go:172] (0xc0031cb6b0) (0xc0012c9720) Create stream I0325 23:50:43.868582 7 log.go:172] (0xc0031cb6b0) (0xc0012c9720) Stream added, broadcasting: 3 I0325 23:50:43.870022 7 log.go:172] (0xc0031cb6b0) Reply frame received for 3 I0325 23:50:43.870043 7 log.go:172] (0xc0031cb6b0) (0xc0012c97c0) Create stream I0325 23:50:43.870057 7 log.go:172] (0xc0031cb6b0) (0xc0012c97c0) Stream added, broadcasting: 5 I0325 23:50:43.870932 7 log.go:172] (0xc0031cb6b0) Reply frame received for 5 I0325 23:50:43.956136 7 log.go:172] (0xc0031cb6b0) Data frame received for 3 I0325 23:50:43.956164 7 log.go:172] (0xc0012c9720) (3) Data frame handling I0325 23:50:43.956183 7 log.go:172] (0xc0012c9720) (3) Data frame sent I0325 23:50:43.956192 7 log.go:172] (0xc0031cb6b0) Data frame received for 3 I0325 23:50:43.956205 7 log.go:172] (0xc0012c9720) (3) Data frame handling I0325 23:50:43.956704 7 log.go:172] (0xc0031cb6b0) Data frame received for 5 I0325 23:50:43.956741 7 log.go:172] (0xc0012c97c0) (5) Data frame handling I0325 23:50:43.958222 7 log.go:172] (0xc0031cb6b0) Data frame received for 1 I0325 23:50:43.958275 7 log.go:172] (0xc001435c20) (1) Data frame handling I0325 23:50:43.958328 7 log.go:172] (0xc001435c20) (1) Data frame sent I0325 23:50:43.958356 7 log.go:172] (0xc0031cb6b0) (0xc001435c20) 
Stream removed, broadcasting: 1 I0325 23:50:43.958375 7 log.go:172] (0xc0031cb6b0) Go away received I0325 23:50:43.958494 7 log.go:172] (0xc0031cb6b0) (0xc001435c20) Stream removed, broadcasting: 1 I0325 23:50:43.958524 7 log.go:172] (0xc0031cb6b0) (0xc0012c9720) Stream removed, broadcasting: 3 I0325 23:50:43.958548 7 log.go:172] (0xc0031cb6b0) (0xc0012c97c0) Stream removed, broadcasting: 5 Mar 25 23:50:43.958: INFO: Exec stderr: "" Mar 25 23:50:43.958: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:43.958: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:43.989232 7 log.go:172] (0xc0031cbce0) (0xc001435f40) Create stream I0325 23:50:43.989260 7 log.go:172] (0xc0031cbce0) (0xc001435f40) Stream added, broadcasting: 1 I0325 23:50:43.991029 7 log.go:172] (0xc0031cbce0) Reply frame received for 1 I0325 23:50:43.991070 7 log.go:172] (0xc0031cbce0) (0xc001b48d20) Create stream I0325 23:50:43.991089 7 log.go:172] (0xc0031cbce0) (0xc001b48d20) Stream added, broadcasting: 3 I0325 23:50:43.991996 7 log.go:172] (0xc0031cbce0) Reply frame received for 3 I0325 23:50:43.992033 7 log.go:172] (0xc0031cbce0) (0xc0012c9860) Create stream I0325 23:50:43.992048 7 log.go:172] (0xc0031cbce0) (0xc0012c9860) Stream added, broadcasting: 5 I0325 23:50:43.992976 7 log.go:172] (0xc0031cbce0) Reply frame received for 5 I0325 23:50:44.047405 7 log.go:172] (0xc0031cbce0) Data frame received for 3 I0325 23:50:44.047520 7 log.go:172] (0xc001b48d20) (3) Data frame handling I0325 23:50:44.047596 7 log.go:172] (0xc001b48d20) (3) Data frame sent I0325 23:50:44.047629 7 log.go:172] (0xc0031cbce0) Data frame received for 3 I0325 23:50:44.047662 7 log.go:172] (0xc001b48d20) (3) Data frame handling I0325 23:50:44.047687 7 log.go:172] (0xc0031cbce0) Data frame received for 5 I0325 23:50:44.047712 7 log.go:172] (0xc0012c9860) (5) Data frame handling I0325 23:50:44.049641 7 log.go:172] (0xc0031cbce0) Data frame received for 1 I0325 23:50:44.049665 7 log.go:172] (0xc001435f40) (1) Data frame handling I0325 23:50:44.049678 7 log.go:172] (0xc001435f40) (1) Data frame sent I0325 23:50:44.049691 7 log.go:172] (0xc0031cbce0) (0xc001435f40) Stream removed, broadcasting: 1 I0325 23:50:44.049713 7 log.go:172] (0xc0031cbce0) Go away received I0325 23:50:44.049841 7 log.go:172] (0xc0031cbce0) (0xc001435f40) Stream removed, broadcasting: 1 I0325 23:50:44.049922 7 log.go:172] (0xc0031cbce0) (0xc001b48d20) Stream removed, broadcasting: 3 I0325 23:50:44.049955 7 log.go:172] (0xc0031cbce0) (0xc0012c9860) Stream removed, broadcasting: 5 Mar 25 23:50:44.049: INFO: Exec stderr: "" Mar 25 23:50:44.050: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.050: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.081476 7 log.go:172] (0xc002a8ce70) (0xc001b49180) Create stream I0325 23:50:44.081522 7 log.go:172] (0xc002a8ce70) (0xc001b49180) Stream added, broadcasting: 1 I0325 23:50:44.084004 7 log.go:172] (0xc002a8ce70) Reply frame received for 1 I0325 23:50:44.084036 7 log.go:172] (0xc002a8ce70) (0xc0012c9900) Create stream I0325 23:50:44.084055 7 log.go:172] (0xc002a8ce70) (0xc0012c9900) Stream added, broadcasting: 3 I0325 23:50:44.085000 7 log.go:172] (0xc002a8ce70) Reply frame received for 3 I0325 23:50:44.085027 7 
log.go:172] (0xc002a8ce70) (0xc0011b80a0) Create stream I0325 23:50:44.085034 7 log.go:172] (0xc002a8ce70) (0xc0011b80a0) Stream added, broadcasting: 5 I0325 23:50:44.086093 7 log.go:172] (0xc002a8ce70) Reply frame received for 5 I0325 23:50:44.155937 7 log.go:172] (0xc002a8ce70) Data frame received for 5 I0325 23:50:44.155982 7 log.go:172] (0xc0011b80a0) (5) Data frame handling I0325 23:50:44.156006 7 log.go:172] (0xc002a8ce70) Data frame received for 3 I0325 23:50:44.156017 7 log.go:172] (0xc0012c9900) (3) Data frame handling I0325 23:50:44.156027 7 log.go:172] (0xc0012c9900) (3) Data frame sent I0325 23:50:44.156033 7 log.go:172] (0xc002a8ce70) Data frame received for 3 I0325 23:50:44.156042 7 log.go:172] (0xc0012c9900) (3) Data frame handling I0325 23:50:44.157078 7 log.go:172] (0xc002a8ce70) Data frame received for 1 I0325 23:50:44.157088 7 log.go:172] (0xc001b49180) (1) Data frame handling I0325 23:50:44.157108 7 log.go:172] (0xc001b49180) (1) Data frame sent I0325 23:50:44.157244 7 log.go:172] (0xc002a8ce70) (0xc001b49180) Stream removed, broadcasting: 1 I0325 23:50:44.157272 7 log.go:172] (0xc002a8ce70) Go away received I0325 23:50:44.157329 7 log.go:172] (0xc002a8ce70) (0xc001b49180) Stream removed, broadcasting: 1 I0325 23:50:44.157343 7 log.go:172] (0xc002a8ce70) (0xc0012c9900) Stream removed, broadcasting: 3 I0325 23:50:44.157352 7 log.go:172] (0xc002a8ce70) (0xc0011b80a0) Stream removed, broadcasting: 5 Mar 25 23:50:44.157: INFO: Exec stderr: "" Mar 25 23:50:44.157: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.157: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.180878 7 log.go:172] (0xc002c26420) (0xc0011b8780) Create stream I0325 23:50:44.180909 7 log.go:172] (0xc002c26420) (0xc0011b8780) Stream added, broadcasting: 1 I0325 23:50:44.183563 7 log.go:172] (0xc002c26420) Reply frame received for 1 I0325 23:50:44.183591 7 log.go:172] (0xc002c26420) (0xc0011b8820) Create stream I0325 23:50:44.183603 7 log.go:172] (0xc002c26420) (0xc0011b8820) Stream added, broadcasting: 3 I0325 23:50:44.184375 7 log.go:172] (0xc002c26420) Reply frame received for 3 I0325 23:50:44.184406 7 log.go:172] (0xc002c26420) (0xc0012c99a0) Create stream I0325 23:50:44.184420 7 log.go:172] (0xc002c26420) (0xc0012c99a0) Stream added, broadcasting: 5 I0325 23:50:44.185419 7 log.go:172] (0xc002c26420) Reply frame received for 5 I0325 23:50:44.240752 7 log.go:172] (0xc002c26420) Data frame received for 5 I0325 23:50:44.240797 7 log.go:172] (0xc0012c99a0) (5) Data frame handling I0325 23:50:44.240825 7 log.go:172] (0xc002c26420) Data frame received for 3 I0325 23:50:44.240840 7 log.go:172] (0xc0011b8820) (3) Data frame handling I0325 23:50:44.240884 7 log.go:172] (0xc0011b8820) (3) Data frame sent I0325 23:50:44.240899 7 log.go:172] (0xc002c26420) Data frame received for 3 I0325 23:50:44.240915 7 log.go:172] (0xc0011b8820) (3) Data frame handling I0325 23:50:44.242624 7 log.go:172] (0xc002c26420) Data frame received for 1 I0325 23:50:44.242673 7 log.go:172] (0xc0011b8780) (1) Data frame handling I0325 23:50:44.242703 7 log.go:172] (0xc0011b8780) (1) Data frame sent I0325 23:50:44.242743 7 log.go:172] (0xc002c26420) (0xc0011b8780) Stream removed, broadcasting: 1 I0325 23:50:44.242862 7 log.go:172] (0xc002c26420) (0xc0011b8780) Stream removed, broadcasting: 1 I0325 23:50:44.242900 7 log.go:172] (0xc002c26420) (0xc0011b8820) Stream 
removed, broadcasting: 3 I0325 23:50:44.242926 7 log.go:172] (0xc002c26420) (0xc0012c99a0) Stream removed, broadcasting: 5 Mar 25 23:50:44.242: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 25 23:50:44.243: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.243: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.244913 7 log.go:172] (0xc002c26420) Go away received I0325 23:50:44.274459 7 log.go:172] (0xc002312370) (0xc001406b40) Create stream I0325 23:50:44.274493 7 log.go:172] (0xc002312370) (0xc001406b40) Stream added, broadcasting: 1 I0325 23:50:44.277661 7 log.go:172] (0xc002312370) Reply frame received for 1 I0325 23:50:44.277720 7 log.go:172] (0xc002312370) (0xc001406d20) Create stream I0325 23:50:44.277755 7 log.go:172] (0xc002312370) (0xc001406d20) Stream added, broadcasting: 3 I0325 23:50:44.278634 7 log.go:172] (0xc002312370) Reply frame received for 3 I0325 23:50:44.278670 7 log.go:172] (0xc002312370) (0xc0012c9a40) Create stream I0325 23:50:44.278685 7 log.go:172] (0xc002312370) (0xc0012c9a40) Stream added, broadcasting: 5 I0325 23:50:44.279766 7 log.go:172] (0xc002312370) Reply frame received for 5 I0325 23:50:44.340491 7 log.go:172] (0xc002312370) Data frame received for 5 I0325 23:50:44.340555 7 log.go:172] (0xc0012c9a40) (5) Data frame handling I0325 23:50:44.340609 7 log.go:172] (0xc002312370) Data frame received for 3 I0325 23:50:44.340639 7 log.go:172] (0xc001406d20) (3) Data frame handling I0325 23:50:44.340667 7 log.go:172] (0xc001406d20) (3) Data frame sent I0325 23:50:44.340686 7 log.go:172] (0xc002312370) Data frame received for 3 I0325 23:50:44.340698 7 log.go:172] (0xc001406d20) (3) Data frame handling I0325 23:50:44.342094 7 log.go:172] (0xc002312370) Data frame received for 1 I0325 23:50:44.342137 7 log.go:172] (0xc001406b40) (1) Data frame handling I0325 23:50:44.342155 7 log.go:172] (0xc001406b40) (1) Data frame sent I0325 23:50:44.342166 7 log.go:172] (0xc002312370) (0xc001406b40) Stream removed, broadcasting: 1 I0325 23:50:44.342233 7 log.go:172] (0xc002312370) (0xc001406b40) Stream removed, broadcasting: 1 I0325 23:50:44.342252 7 log.go:172] (0xc002312370) (0xc001406d20) Stream removed, broadcasting: 3 I0325 23:50:44.342314 7 log.go:172] (0xc002312370) Go away received I0325 23:50:44.342344 7 log.go:172] (0xc002312370) (0xc0012c9a40) Stream removed, broadcasting: 5 Mar 25 23:50:44.342: INFO: Exec stderr: "" Mar 25 23:50:44.342: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.342: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.371825 7 log.go:172] (0xc002a8d4a0) (0xc001b49400) Create stream I0325 23:50:44.371846 7 log.go:172] (0xc002a8d4a0) (0xc001b49400) Stream added, broadcasting: 1 I0325 23:50:44.374278 7 log.go:172] (0xc002a8d4a0) Reply frame received for 1 I0325 23:50:44.374329 7 log.go:172] (0xc002a8d4a0) (0xc0011b8960) Create stream I0325 23:50:44.374351 7 log.go:172] (0xc002a8d4a0) (0xc0011b8960) Stream added, broadcasting: 3 I0325 23:50:44.375456 7 log.go:172] (0xc002a8d4a0) Reply frame received for 3 I0325 23:50:44.375474 7 log.go:172] (0xc002a8d4a0) (0xc0012c9b80) Create stream I0325 23:50:44.375481 7 log.go:172] 
(0xc002a8d4a0) (0xc0012c9b80) Stream added, broadcasting: 5 I0325 23:50:44.376504 7 log.go:172] (0xc002a8d4a0) Reply frame received for 5 I0325 23:50:44.433605 7 log.go:172] (0xc002a8d4a0) Data frame received for 5 I0325 23:50:44.433637 7 log.go:172] (0xc0012c9b80) (5) Data frame handling I0325 23:50:44.433671 7 log.go:172] (0xc002a8d4a0) Data frame received for 3 I0325 23:50:44.433701 7 log.go:172] (0xc0011b8960) (3) Data frame handling I0325 23:50:44.433724 7 log.go:172] (0xc0011b8960) (3) Data frame sent I0325 23:50:44.433747 7 log.go:172] (0xc002a8d4a0) Data frame received for 3 I0325 23:50:44.433762 7 log.go:172] (0xc0011b8960) (3) Data frame handling I0325 23:50:44.434780 7 log.go:172] (0xc002a8d4a0) Data frame received for 1 I0325 23:50:44.434814 7 log.go:172] (0xc001b49400) (1) Data frame handling I0325 23:50:44.434832 7 log.go:172] (0xc001b49400) (1) Data frame sent I0325 23:50:44.434844 7 log.go:172] (0xc002a8d4a0) (0xc001b49400) Stream removed, broadcasting: 1 I0325 23:50:44.434903 7 log.go:172] (0xc002a8d4a0) (0xc001b49400) Stream removed, broadcasting: 1 I0325 23:50:44.434943 7 log.go:172] (0xc002a8d4a0) (0xc0011b8960) Stream removed, broadcasting: 3 I0325 23:50:44.435081 7 log.go:172] (0xc002a8d4a0) (0xc0012c9b80) Stream removed, broadcasting: 5 I0325 23:50:44.435133 7 log.go:172] (0xc002a8d4a0) Go away received Mar 25 23:50:44.435: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 25 23:50:44.435: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.435: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.467308 7 log.go:172] (0xc002deaa50) (0xc000efc000) Create stream I0325 23:50:44.467355 7 log.go:172] (0xc002deaa50) (0xc000efc000) Stream added, broadcasting: 1 I0325 23:50:44.470135 7 log.go:172] (0xc002deaa50) Reply frame received for 1 I0325 23:50:44.470160 7 log.go:172] (0xc002deaa50) (0xc001406dc0) Create stream I0325 23:50:44.470167 7 log.go:172] (0xc002deaa50) (0xc001406dc0) Stream added, broadcasting: 3 I0325 23:50:44.471185 7 log.go:172] (0xc002deaa50) Reply frame received for 3 I0325 23:50:44.471254 7 log.go:172] (0xc002deaa50) (0xc0011b8a00) Create stream I0325 23:50:44.471271 7 log.go:172] (0xc002deaa50) (0xc0011b8a00) Stream added, broadcasting: 5 I0325 23:50:44.472110 7 log.go:172] (0xc002deaa50) Reply frame received for 5 I0325 23:50:44.540226 7 log.go:172] (0xc002deaa50) Data frame received for 5 I0325 23:50:44.540279 7 log.go:172] (0xc002deaa50) Data frame received for 3 I0325 23:50:44.540337 7 log.go:172] (0xc001406dc0) (3) Data frame handling I0325 23:50:44.540368 7 log.go:172] (0xc001406dc0) (3) Data frame sent I0325 23:50:44.540398 7 log.go:172] (0xc002deaa50) Data frame received for 3 I0325 23:50:44.540422 7 log.go:172] (0xc001406dc0) (3) Data frame handling I0325 23:50:44.540463 7 log.go:172] (0xc0011b8a00) (5) Data frame handling I0325 23:50:44.542163 7 log.go:172] (0xc002deaa50) Data frame received for 1 I0325 23:50:44.542181 7 log.go:172] (0xc000efc000) (1) Data frame handling I0325 23:50:44.542190 7 log.go:172] (0xc000efc000) (1) Data frame sent I0325 23:50:44.542207 7 log.go:172] (0xc002deaa50) (0xc000efc000) Stream removed, broadcasting: 1 I0325 23:50:44.542240 7 log.go:172] (0xc002deaa50) Go away received I0325 23:50:44.542281 7 log.go:172] (0xc002deaa50) (0xc000efc000) Stream removed, 
broadcasting: 1 I0325 23:50:44.542295 7 log.go:172] (0xc002deaa50) (0xc001406dc0) Stream removed, broadcasting: 3 I0325 23:50:44.542301 7 log.go:172] (0xc002deaa50) (0xc0011b8a00) Stream removed, broadcasting: 5 Mar 25 23:50:44.542: INFO: Exec stderr: "" Mar 25 23:50:44.542: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.542: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.577233 7 log.go:172] (0xc002deb1e0) (0xc000efc500) Create stream I0325 23:50:44.577257 7 log.go:172] (0xc002deb1e0) (0xc000efc500) Stream added, broadcasting: 1 I0325 23:50:44.579773 7 log.go:172] (0xc002deb1e0) Reply frame received for 1 I0325 23:50:44.579812 7 log.go:172] (0xc002deb1e0) (0xc001b494a0) Create stream I0325 23:50:44.579830 7 log.go:172] (0xc002deb1e0) (0xc001b494a0) Stream added, broadcasting: 3 I0325 23:50:44.580815 7 log.go:172] (0xc002deb1e0) Reply frame received for 3 I0325 23:50:44.580861 7 log.go:172] (0xc002deb1e0) (0xc000b7c140) Create stream I0325 23:50:44.580876 7 log.go:172] (0xc002deb1e0) (0xc000b7c140) Stream added, broadcasting: 5 I0325 23:50:44.582085 7 log.go:172] (0xc002deb1e0) Reply frame received for 5 I0325 23:50:44.637073 7 log.go:172] (0xc002deb1e0) Data frame received for 5 I0325 23:50:44.637270 7 log.go:172] (0xc000b7c140) (5) Data frame handling I0325 23:50:44.637331 7 log.go:172] (0xc002deb1e0) Data frame received for 3 I0325 23:50:44.637361 7 log.go:172] (0xc001b494a0) (3) Data frame handling I0325 23:50:44.637387 7 log.go:172] (0xc001b494a0) (3) Data frame sent I0325 23:50:44.637411 7 log.go:172] (0xc002deb1e0) Data frame received for 3 I0325 23:50:44.637431 7 log.go:172] (0xc001b494a0) (3) Data frame handling I0325 23:50:44.638632 7 log.go:172] (0xc002deb1e0) Data frame received for 1 I0325 23:50:44.638646 7 log.go:172] (0xc000efc500) (1) Data frame handling I0325 23:50:44.638652 7 log.go:172] (0xc000efc500) (1) Data frame sent I0325 23:50:44.638660 7 log.go:172] (0xc002deb1e0) (0xc000efc500) Stream removed, broadcasting: 1 I0325 23:50:44.638711 7 log.go:172] (0xc002deb1e0) Go away received I0325 23:50:44.638757 7 log.go:172] (0xc002deb1e0) (0xc000efc500) Stream removed, broadcasting: 1 I0325 23:50:44.638808 7 log.go:172] (0xc002deb1e0) (0xc001b494a0) Stream removed, broadcasting: 3 I0325 23:50:44.638841 7 log.go:172] (0xc002deb1e0) (0xc000b7c140) Stream removed, broadcasting: 5 Mar 25 23:50:44.638: INFO: Exec stderr: "" Mar 25 23:50:44.638: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.638: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.670022 7 log.go:172] (0xc002c26a50) (0xc0011b8d20) Create stream I0325 23:50:44.670051 7 log.go:172] (0xc002c26a50) (0xc0011b8d20) Stream added, broadcasting: 1 I0325 23:50:44.673011 7 log.go:172] (0xc002c26a50) Reply frame received for 1 I0325 23:50:44.673044 7 log.go:172] (0xc002c26a50) (0xc001406e60) Create stream I0325 23:50:44.673057 7 log.go:172] (0xc002c26a50) (0xc001406e60) Stream added, broadcasting: 3 I0325 23:50:44.674115 7 log.go:172] (0xc002c26a50) Reply frame received for 3 I0325 23:50:44.674152 7 log.go:172] (0xc002c26a50) (0xc001b495e0) Create stream I0325 23:50:44.674166 7 log.go:172] (0xc002c26a50) (0xc001b495e0) Stream added, broadcasting: 5 I0325 
23:50:44.675011 7 log.go:172] (0xc002c26a50) Reply frame received for 5 I0325 23:50:44.741521 7 log.go:172] (0xc002c26a50) Data frame received for 5 I0325 23:50:44.741549 7 log.go:172] (0xc001b495e0) (5) Data frame handling I0325 23:50:44.741567 7 log.go:172] (0xc002c26a50) Data frame received for 3 I0325 23:50:44.741580 7 log.go:172] (0xc001406e60) (3) Data frame handling I0325 23:50:44.741588 7 log.go:172] (0xc001406e60) (3) Data frame sent I0325 23:50:44.741672 7 log.go:172] (0xc002c26a50) Data frame received for 3 I0325 23:50:44.741695 7 log.go:172] (0xc001406e60) (3) Data frame handling I0325 23:50:44.743313 7 log.go:172] (0xc002c26a50) Data frame received for 1 I0325 23:50:44.743366 7 log.go:172] (0xc0011b8d20) (1) Data frame handling I0325 23:50:44.743390 7 log.go:172] (0xc0011b8d20) (1) Data frame sent I0325 23:50:44.743412 7 log.go:172] (0xc002c26a50) (0xc0011b8d20) Stream removed, broadcasting: 1 I0325 23:50:44.743430 7 log.go:172] (0xc002c26a50) Go away received I0325 23:50:44.743541 7 log.go:172] (0xc002c26a50) (0xc0011b8d20) Stream removed, broadcasting: 1 I0325 23:50:44.743562 7 log.go:172] (0xc002c26a50) (0xc001406e60) Stream removed, broadcasting: 3 I0325 23:50:44.743591 7 log.go:172] (0xc002c26a50) (0xc001b495e0) Stream removed, broadcasting: 5 Mar 25 23:50:44.743: INFO: Exec stderr: "" Mar 25 23:50:44.743: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 23:50:44.743: INFO: >>> kubeConfig: /root/.kube/config I0325 23:50:44.781935 7 log.go:172] (0xc002c27080) (0xc0011b90e0) Create stream I0325 23:50:44.781957 7 log.go:172] (0xc002c27080) (0xc0011b90e0) Stream added, broadcasting: 1 I0325 23:50:44.784682 7 log.go:172] (0xc002c27080) Reply frame received for 1 I0325 23:50:44.784721 7 log.go:172] (0xc002c27080) (0xc001406f00) Create stream I0325 23:50:44.784749 7 log.go:172] (0xc002c27080) (0xc001406f00) Stream added, broadcasting: 3 I0325 23:50:44.785871 7 log.go:172] (0xc002c27080) Reply frame received for 3 I0325 23:50:44.785919 7 log.go:172] (0xc002c27080) (0xc001406fa0) Create stream I0325 23:50:44.785942 7 log.go:172] (0xc002c27080) (0xc001406fa0) Stream added, broadcasting: 5 I0325 23:50:44.787261 7 log.go:172] (0xc002c27080) Reply frame received for 5 I0325 23:50:44.848664 7 log.go:172] (0xc002c27080) Data frame received for 3 I0325 23:50:44.848701 7 log.go:172] (0xc001406f00) (3) Data frame handling I0325 23:50:44.848715 7 log.go:172] (0xc001406f00) (3) Data frame sent I0325 23:50:44.848736 7 log.go:172] (0xc002c27080) Data frame received for 5 I0325 23:50:44.848754 7 log.go:172] (0xc001406fa0) (5) Data frame handling I0325 23:50:44.850397 7 log.go:172] (0xc002c27080) Data frame received for 3 I0325 23:50:44.850436 7 log.go:172] (0xc001406f00) (3) Data frame handling I0325 23:50:44.857089 7 log.go:172] (0xc002c27080) Data frame received for 1 I0325 23:50:44.857283 7 log.go:172] (0xc0011b90e0) (1) Data frame handling I0325 23:50:44.857309 7 log.go:172] (0xc0011b90e0) (1) Data frame sent I0325 23:50:44.857529 7 log.go:172] (0xc002c27080) (0xc0011b90e0) Stream removed, broadcasting: 1 I0325 23:50:44.857561 7 log.go:172] (0xc002c27080) Go away received I0325 23:50:44.857628 7 log.go:172] (0xc002c27080) (0xc0011b90e0) Stream removed, broadcasting: 1 I0325 23:50:44.857642 7 log.go:172] (0xc002c27080) (0xc001406f00) Stream removed, broadcasting: 3 I0325 23:50:44.857651 7 log.go:172] (0xc002c27080) 
(0xc001406fa0) Stream removed, broadcasting: 5 Mar 25 23:50:44.857: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:44.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1365" for this suite. • [SLOW TEST:11.227 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":943,"failed":0} [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:44.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 25 23:50:44.973: INFO: Waiting up to 5m0s for pod "pod-1144e0a7-de56-4bb1-93f1-f556ea562874" in namespace "emptydir-2395" to be "Succeeded or Failed" Mar 25 23:50:44.976: INFO: Pod "pod-1144e0a7-de56-4bb1-93f1-f556ea562874": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431397ms Mar 25 23:50:46.980: INFO: Pod "pod-1144e0a7-de56-4bb1-93f1-f556ea562874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007497071s Mar 25 23:50:48.986: INFO: Pod "pod-1144e0a7-de56-4bb1-93f1-f556ea562874": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013326333s STEP: Saw pod success Mar 25 23:50:48.986: INFO: Pod "pod-1144e0a7-de56-4bb1-93f1-f556ea562874" satisfied condition "Succeeded or Failed" Mar 25 23:50:49.001: INFO: Trying to get logs from node latest-worker pod pod-1144e0a7-de56-4bb1-93f1-f556ea562874 container test-container: STEP: delete the pod Mar 25 23:50:49.038: INFO: Waiting for pod pod-1144e0a7-de56-4bb1-93f1-f556ea562874 to disappear Mar 25 23:50:49.042: INFO: Pod pod-1144e0a7-de56-4bb1-93f1-f556ea562874 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:50:49.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2395" for this suite. 
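This is the companion case to the tmpfs test earlier in the run: leaving the emptyDir medium unset ("") places the volume on whatever storage backs the node's filesystem rather than on tmpfs. In the API types the only difference between the two tests' volumes is the Medium value, as this small sketch shows.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Default medium ("") is backed by node storage; "Memory" is tmpfs,
	// counted against the pod's memory usage.
	for _, m := range []corev1.StorageMedium{corev1.StorageMediumDefault, corev1.StorageMediumMemory} {
		src := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: m}}
		fmt.Printf("medium=%q -> %+v\n", m, src.EmptyDir)
	}
}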
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":943,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:50:49.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:50:49.110: INFO: Create a RollingUpdate DaemonSet Mar 25 23:50:49.113: INFO: Check that daemon pods launch on every node of the cluster Mar 25 23:50:49.133: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:49.162: INFO: Number of nodes with available pods: 0 Mar 25 23:50:49.162: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:50:50.199: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:50.204: INFO: Number of nodes with available pods: 0 Mar 25 23:50:50.204: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:50:51.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:51.170: INFO: Number of nodes with available pods: 0 Mar 25 23:50:51.171: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:50:52.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:52.168: INFO: Number of nodes with available pods: 1 Mar 25 23:50:52.168: INFO: Node latest-worker is running more than one daemon pod Mar 25 23:50:53.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:53.170: INFO: Number of nodes with available pods: 2 Mar 25 23:50:53.170: INFO: Number of running nodes: 2, number of available pods: 2 Mar 25 23:50:53.170: INFO: Update the DaemonSet to trigger a rollout Mar 25 23:50:53.178: INFO: Updating DaemonSet daemon-set Mar 25 23:50:57.204: INFO: Roll back the DaemonSet before rollout is complete Mar 25 23:50:57.209: INFO: Updating DaemonSet daemon-set Mar 25 23:50:57.209: INFO: Make sure DaemonSet rollback is complete Mar 25 23:50:57.217: INFO: Wrong image for pod: daemon-set-5gb4h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 25 23:50:57.217: INFO: Pod daemon-set-5gb4h is not available Mar 25 23:50:57.223: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:58.227: INFO: Wrong image for pod: daemon-set-5gb4h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 25 23:50:58.227: INFO: Pod daemon-set-5gb4h is not available Mar 25 23:50:58.231: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:50:59.340: INFO: Wrong image for pod: daemon-set-5gb4h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 25 23:50:59.340: INFO: Pod daemon-set-5gb4h is not available Mar 25 23:50:59.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 23:51:00.226: INFO: Pod daemon-set-vnfdw is not available Mar 25 23:51:00.229: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5068, will wait for the garbage collector to delete the pods Mar 25 23:51:00.294: INFO: Deleting DaemonSet.extensions daemon-set took: 5.971328ms Mar 25 23:51:00.595: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275099ms Mar 25 23:51:04.098: INFO: Number of nodes with available pods: 0 Mar 25 23:51:04.098: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 23:51:04.100: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5068/daemonsets","resourceVersion":"2803587"},"items":null} Mar 25 23:51:04.102: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5068/pods","resourceVersion":"2803587"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:51:04.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5068" for this suite. 
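The rollout in this spec is driven by an image update on a RollingUpdate DaemonSet, followed by a rollback before the rollout completes. A sketch of such a DaemonSet, using the docker.io/library/httpd:2.4.38-alpine image the log shows as the expected one (metadata and labels are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate           # pods are replaced node by node as the template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine

Updating spec.template.spec.containers[0].image to a broken tag (foo:non-existent in the log above) starts a rollout that can then be undone, for example with kubectl rollout undo daemonset/daemon-set; pods that were never replaced should come through the rollback without a restart, which is what the spec asserts.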
• [SLOW TEST:15.066 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":57,"skipped":946,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:51:04.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:51:04.189: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 25 23:51:07.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8427 create -f -' Mar 25 23:51:10.237: INFO: stderr: "" Mar 25 23:51:10.237: INFO: stdout: "e2e-test-crd-publish-openapi-2642-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 25 23:51:10.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8427 delete e2e-test-crd-publish-openapi-2642-crds test-cr' Mar 25 23:51:10.397: INFO: stderr: "" Mar 25 23:51:10.397: INFO: stdout: "e2e-test-crd-publish-openapi-2642-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 25 23:51:10.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8427 apply -f -' Mar 25 23:51:10.638: INFO: stderr: "" Mar 25 23:51:10.638: INFO: stdout: "e2e-test-crd-publish-openapi-2642-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 25 23:51:10.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8427 delete e2e-test-crd-publish-openapi-2642-crds test-cr' Mar 25 23:51:10.759: INFO: stderr: "" Mar 25 23:51:10.759: INFO: stdout: "e2e-test-crd-publish-openapi-2642-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 25 23:51:10.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2642-crds' Mar 25 23:51:11.029: INFO: stderr: "" Mar 25 23:51:11.029: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2642-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:51:13.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8427" for this suite. • [SLOW TEST:9.803 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":58,"skipped":946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:51:13.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:51:13.981: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:51:16.001: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 23:51:17.984: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:19.983: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:21.985: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:23.985: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:25.985: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:27.995: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:29.984: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:31.984: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:33.984: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:35.990: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is Running (Ready = false) Mar 25 23:51:37.985: INFO: The status of Pod test-webserver-2f7945b0-1438-4b00-8df3-ebafeaf2a7e2 is 
Running (Ready = true) Mar 25 23:51:37.988: INFO: Container started at 2020-03-25 23:51:16 +0000 UTC, pod became ready at 2020-03-25 23:51:36 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:51:37.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4216" for this suite. • [SLOW TEST:24.079 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:51:37.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 25 23:51:38.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d" in namespace "projected-2994" to be "Succeeded or Failed" Mar 25 23:51:38.104: INFO: Pod "downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.079364ms Mar 25 23:51:40.108: INFO: Pod "downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019204515s Mar 25 23:51:42.112: INFO: Pod "downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023505333s STEP: Saw pod success Mar 25 23:51:42.112: INFO: Pod "downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d" satisfied condition "Succeeded or Failed" Mar 25 23:51:42.116: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d container client-container: STEP: delete the pod Mar 25 23:51:42.129: INFO: Waiting for pod downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d to disappear Mar 25 23:51:42.134: INFO: Pod downwardapi-volume-9b4317e0-342f-4fa4-8f04-74c3be85639d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:51:42.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2994" for this suite. 
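The projected downwardAPI spec above sets an explicit mode on an individual item file, overriding the volume-wide default. A sketch of that volume shape, with hypothetical names and 0400 (decimal 256) as the per-item mode:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 256             # 0400 in octal; a per-item mode overrides defaultMode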
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":1011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:51:42.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 23:51:42.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 23:51:44.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777102, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777102, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777102, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 23:51:47.642: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:51:47.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:51:48.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8536" for this 
suite. STEP: Destroying namespace "webhook-8536-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.702 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":61,"skipped":1041,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:51:48.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0325 23:52:29.608967 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 25 23:52:29.609: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:52:29.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7986" for this suite. 
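The orphaning behaviour exercised here is controlled by the delete options on the ReplicationController, not by the RC manifest itself: with an Orphan propagation policy the garbage collector removes the RC but leaves its pods in place, which is why the spec waits 30 seconds to confirm nothing gets deleted by mistake. A minimal RC of the kind such a spec creates (names and image are illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc             # hypothetical name
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx

Deleting it with orphaning enabled — kubectl delete rc simpletest-rc --cascade=orphan on recent kubectl releases, spelled --cascade=false on older ones — reproduces the "delete options say so" path the spec exercises.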
• [SLOW TEST:40.778 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":62,"skipped":1058,"failed":0} SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:52:29.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Mar 25 23:52:33.734: INFO: Pod pod-hostip-72593df7-4e5e-455b-9628-ee432d7f7be4 has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:52:33.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8458" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1061,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:52:33.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:52:49.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5790" for this suite. STEP: Destroying namespace "nsdeletetest-1727" for this suite. 
Mar 25 23:52:49.021: INFO: Namespace nsdeletetest-1727 was already deleted STEP: Destroying namespace "nsdeletetest-7293" for this suite. • [SLOW TEST:15.284 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":64,"skipped":1064,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:52:49.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 25 23:52:49.090: INFO: Waiting up to 5m0s for pod "downward-api-082cdd31-6de4-46d5-a25c-417e30126186" in namespace "downward-api-4690" to be "Succeeded or Failed" Mar 25 23:52:49.093: INFO: Pod "downward-api-082cdd31-6de4-46d5-a25c-417e30126186": Phase="Pending", Reason="", readiness=false. Elapsed: 3.54908ms Mar 25 23:52:51.098: INFO: Pod "downward-api-082cdd31-6de4-46d5-a25c-417e30126186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007761921s Mar 25 23:52:53.101: INFO: Pod "downward-api-082cdd31-6de4-46d5-a25c-417e30126186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011654278s STEP: Saw pod success Mar 25 23:52:53.102: INFO: Pod "downward-api-082cdd31-6de4-46d5-a25c-417e30126186" satisfied condition "Succeeded or Failed" Mar 25 23:52:53.105: INFO: Trying to get logs from node latest-worker pod downward-api-082cdd31-6de4-46d5-a25c-417e30126186 container dapi-container: STEP: delete the pod Mar 25 23:52:53.137: INFO: Waiting for pod downward-api-082cdd31-6de4-46d5-a25c-417e30126186 to disappear Mar 25 23:52:53.154: INFO: Pod downward-api-082cdd31-6de4-46d5-a25c-417e30126186 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:52:53.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4690" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1086,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:52:53.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:53:06.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2641" for this suite. • [SLOW TEST:13.181 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":66,"skipped":1087,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:53:06.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:53:06.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4857" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":67,"skipped":1101,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:53:06.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:53:06.515: INFO: Waiting up to 5m0s for pod "busybox-user-65534-51034136-c35b-4051-b50c-71c2191896b7" in namespace "security-context-test-6866" to be "Succeeded or Failed" Mar 25 23:53:06.528: INFO: Pod "busybox-user-65534-51034136-c35b-4051-b50c-71c2191896b7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.123944ms Mar 25 23:53:08.532: INFO: Pod "busybox-user-65534-51034136-c35b-4051-b50c-71c2191896b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016962265s Mar 25 23:53:10.536: INFO: Pod "busybox-user-65534-51034136-c35b-4051-b50c-71c2191896b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02144216s Mar 25 23:53:10.536: INFO: Pod "busybox-user-65534-51034136-c35b-4051-b50c-71c2191896b7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:53:10.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6866" for this suite. 
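The securityContext under test pins the container process to a fixed UID. A minimal sketch, assuming busybox and a hypothetical pod name; the suite asserts the process reports uid 65534:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "id -u"]   # should print 65534
    securityContext:
      runAsUser: 65534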
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1118,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:53:10.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 25 23:53:10.621: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9" in namespace "projected-694" to be "Succeeded or Failed" Mar 25 23:53:10.639: INFO: Pod "downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.859452ms Mar 25 23:53:12.643: INFO: Pod "downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021410828s Mar 25 23:53:14.647: INFO: Pod "downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025539832s STEP: Saw pod success Mar 25 23:53:14.647: INFO: Pod "downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9" satisfied condition "Succeeded or Failed" Mar 25 23:53:14.650: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9 container client-container: STEP: delete the pod Mar 25 23:53:14.682: INFO: Waiting for pod downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9 to disappear Mar 25 23:53:14.687: INFO: Pod downwardapi-volume-2afd88e8-5410-4913-9119-ca9cb891a7d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:53:14.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-694" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1125,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:53:14.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 25 23:53:14.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a" in namespace "downward-api-624" to be "Succeeded or Failed" Mar 25 23:53:14.796: INFO: Pod "downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286097ms Mar 25 23:53:16.800: INFO: Pod "downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006442687s Mar 25 23:53:18.804: INFO: Pod "downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010430643s STEP: Saw pod success Mar 25 23:53:18.804: INFO: Pod "downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a" satisfied condition "Succeeded or Failed" Mar 25 23:53:18.808: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a container client-container: STEP: delete the pod Mar 25 23:53:18.839: INFO: Waiting for pod downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a to disappear Mar 25 23:53:18.848: INFO: Pod downwardapi-volume-55eb116b-1747-42bc-a277-d5ee5200d99a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:53:18.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-624" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1126,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:53:18.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9580 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9580;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9580 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9580;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9580.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9580.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9580.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9580.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9580.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9580.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9580.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9580.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9580.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.112.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.112.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.112.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.112.67_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9580 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9580;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9580 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9580;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9580.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9580.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9580.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9580.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9580.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9580.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9580.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9580.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9580.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9580.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.112.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.112.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.112.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.112.67_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 23:53:25.098: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.101: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.106: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.111: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.139: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.142: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.144: INFO: Unable to read jessie_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.146: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.149: INFO: Unable to read jessie_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:25.174: INFO: Lookups using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9580 wheezy_tcp@dns-test-service.dns-9580 wheezy_udp@dns-test-service.dns-9580.svc wheezy_tcp@dns-test-service.dns-9580.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9580 jessie_tcp@dns-test-service.dns-9580 jessie_udp@dns-test-service.dns-9580.svc jessie_tcp@dns-test-service.dns-9580.svc] Mar 25 23:53:30.178: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.182: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.192: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.196: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.216: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.218: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.220: INFO: Unable to read jessie_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.224: INFO: Unable to read jessie_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.226: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:30.245: INFO: Lookups using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9580 wheezy_tcp@dns-test-service.dns-9580 wheezy_udp@dns-test-service.dns-9580.svc wheezy_tcp@dns-test-service.dns-9580.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9580 jessie_tcp@dns-test-service.dns-9580 jessie_udp@dns-test-service.dns-9580.svc jessie_tcp@dns-test-service.dns-9580.svc] Mar 25 23:53:35.179: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.182: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.191: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.194: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.222: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.225: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.227: INFO: Unable to read jessie_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.232: INFO: Unable to read jessie_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.235: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:35.257: INFO: Lookups using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9580 wheezy_tcp@dns-test-service.dns-9580 wheezy_udp@dns-test-service.dns-9580.svc wheezy_tcp@dns-test-service.dns-9580.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-9580 jessie_tcp@dns-test-service.dns-9580 jessie_udp@dns-test-service.dns-9580.svc jessie_tcp@dns-test-service.dns-9580.svc] Mar 25 23:53:40.179: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.183: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.187: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.190: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.193: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.195: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.225: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.228: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.230: INFO: Unable to read jessie_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.233: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.236: INFO: Unable to read jessie_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.239: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:40.266: INFO: Lookups using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9580 wheezy_tcp@dns-test-service.dns-9580 wheezy_udp@dns-test-service.dns-9580.svc wheezy_tcp@dns-test-service.dns-9580.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9580 
jessie_tcp@dns-test-service.dns-9580 jessie_udp@dns-test-service.dns-9580.svc jessie_tcp@dns-test-service.dns-9580.svc] Mar 25 23:53:45.179: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.182: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.187: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.190: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.193: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.214: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.216: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.220: INFO: Unable to read jessie_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.226: INFO: Unable to read jessie_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:45.248: INFO: Lookups using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9580 wheezy_tcp@dns-test-service.dns-9580 wheezy_udp@dns-test-service.dns-9580.svc wheezy_tcp@dns-test-service.dns-9580.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9580 jessie_tcp@dns-test-service.dns-9580 
jessie_udp@dns-test-service.dns-9580.svc jessie_tcp@dns-test-service.dns-9580.svc] Mar 25 23:53:50.178: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.182: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.191: INFO: Unable to read wheezy_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.194: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.222: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.225: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.228: INFO: Unable to read jessie_udp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.231: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580 from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.235: INFO: Unable to read jessie_udp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.238: INFO: Unable to read jessie_tcp@dns-test-service.dns-9580.svc from pod dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da: the server could not find the requested resource (get pods dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da) Mar 25 23:53:50.262: INFO: Lookups using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9580 wheezy_tcp@dns-test-service.dns-9580 wheezy_udp@dns-test-service.dns-9580.svc wheezy_tcp@dns-test-service.dns-9580.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9580 jessie_tcp@dns-test-service.dns-9580 jessie_udp@dns-test-service.dns-9580.svc 
jessie_tcp@dns-test-service.dns-9580.svc] Mar 25 23:53:55.261: INFO: DNS probes using dns-9580/dns-test-98d1ce77-8f43-41b0-9713-9aa9542160da succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:53:55.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9580" for this suite. • [SLOW TEST:36.827 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":71,"skipped":1144,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:53:55.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes Mar 25 23:53:56.054: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 25 23:54:01.069: INFO: Pod name pod-release: Found 1 pod out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:02.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7240" for this suite.
• [SLOW TEST:6.367 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":72,"skipped":1162,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:02.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Mar 25 23:54:02.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-4009 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 25 23:54:02.291: INFO: stderr: "" Mar 25 23:54:02.291: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Mar 25 23:54:02.291: INFO: Waiting up to 5m0s for 1 pod to be running and ready, or succeeded: [logs-generator] Mar 25 23:54:02.291: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4009" to be "running and ready, or succeeded" Mar 25 23:54:02.299: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.415089ms Mar 25 23:54:04.304: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012548836s Mar 25 23:54:06.315: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.023351932s Mar 25 23:54:06.315: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 25 23:54:06.315: INFO: Wanted all 1 pod to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Mar 25 23:54:06.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4009' Mar 25 23:54:06.436: INFO: stderr: "" Mar 25 23:54:06.436: INFO: stdout: "I0325 23:54:04.462167 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/6hf4 255\nI0325 23:54:04.662470 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/pkx 395\nI0325 23:54:04.862302 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/k49 358\nI0325 23:54:05.062336 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/kpn 308\nI0325 23:54:05.262412 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/jn65 323\nI0325 23:54:05.462333 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kxzd 375\nI0325 23:54:05.662397 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/49t 297\nI0325 23:54:05.862369 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/mgtc 531\nI0325 23:54:06.062406 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/n6z4 541\nI0325 23:54:06.262292 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/8hq 549\n" STEP: limiting log lines Mar 25 23:54:06.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4009 --tail=1' Mar 25 23:54:06.538: INFO: stderr: "" Mar 25 23:54:06.538: INFO: stdout: "I0325 23:54:06.462333 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/smf 513\n" Mar 25 23:54:06.538: INFO: got output "I0325 23:54:06.462333 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/smf 513\n" STEP: limiting log bytes Mar 25 23:54:06.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4009 --limit-bytes=1' Mar 25 23:54:06.650: INFO: stderr: "" Mar 25 23:54:06.650: INFO: stdout: "I" Mar 25 23:54:06.650: INFO: got output "I" STEP: exposing timestamps Mar 25 23:54:06.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4009 --tail=1 --timestamps' Mar 25 23:54:06.755: INFO: stderr: "" Mar 25 23:54:06.755: INFO: stdout: "2020-03-25T23:54:06.662517386Z I0325 23:54:06.662341 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/f9xq 531\n" Mar 25 23:54:06.755: INFO: got output "2020-03-25T23:54:06.662517386Z I0325 23:54:06.662341 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/f9xq 531\n" STEP: restricting to a time range Mar 25 23:54:09.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4009 --since=1s' Mar 25 23:54:09.351: INFO: stderr: "" Mar 25 23:54:09.351: INFO: stdout: "I0325 23:54:08.462365 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/546l 563\nI0325 23:54:08.662362 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/bj4v 303\nI0325 23:54:08.862331 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/c4h 298\nI0325 23:54:09.062322 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/8dg 479\nI0325 23:54:09.262350 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/9vn 470\n" Mar 25 23:54:09.351: INFO:
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4009 --since=24h' Mar 25 23:54:09.468: INFO: stderr: "" Mar 25 23:54:09.468: INFO: stdout: "I0325 23:54:04.462167 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/6hf4 255\nI0325 23:54:04.662470 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/pkx 395\nI0325 23:54:04.862302 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/k49 358\nI0325 23:54:05.062336 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/kpn 308\nI0325 23:54:05.262412 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/jn65 323\nI0325 23:54:05.462333 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kxzd 375\nI0325 23:54:05.662397 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/49t 297\nI0325 23:54:05.862369 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/mgtc 531\nI0325 23:54:06.062406 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/n6z4 541\nI0325 23:54:06.262292 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/8hq 549\nI0325 23:54:06.462333 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/smf 513\nI0325 23:54:06.662341 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/f9xq 531\nI0325 23:54:06.862345 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/mtg 204\nI0325 23:54:07.062390 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/v9k 378\nI0325 23:54:07.262315 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/vf9l 370\nI0325 23:54:07.462362 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/w4tn 356\nI0325 23:54:07.662350 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/68fk 237\nI0325 23:54:07.862317 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/vhv 332\nI0325 23:54:08.062318 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/925b 563\nI0325 23:54:08.262321 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/2rl 481\nI0325 23:54:08.462365 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/546l 563\nI0325 23:54:08.662362 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/bj4v 303\nI0325 23:54:08.862331 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/c4h 298\nI0325 23:54:09.062322 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/8dg 479\nI0325 23:54:09.262350 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/9vn 470\nI0325 23:54:09.462308 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/8s7 220\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Mar 25 23:54:09.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4009' Mar 25 23:54:11.543: INFO: stderr: "" Mar 25 23:54:11.543: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:11.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4009" for this suite. 
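The filtering behavior exercised above relies only on stock kubectl flags (--tail, --limit-bytes, --timestamps, --since), so it can be reproduced by hand against any running pod; a minimal sketch, reusing the pod and namespace names from this run:

  # Tail only the most recent log line
  kubectl logs logs-generator --namespace=kubectl-4009 --tail=1
  # Cap the returned output at a single byte
  kubectl logs logs-generator --namespace=kubectl-4009 --limit-bytes=1
  # Prefix each line with its RFC3339 timestamp
  kubectl logs logs-generator --namespace=kubectl-4009 --tail=1 --timestamps
  # Return only lines emitted within the last second
  kubectl logs logs-generator --namespace=kubectl-4009 --since=1s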
• [SLOW TEST:9.454 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":73,"skipped":1169,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:11.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create a new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0325 23:54:12.647742 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 25 23:54:12.647: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:12.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-770" for this suite.
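The orphaning behavior verified above corresponds to deleting with deleteOptions.propagationPolicy=Orphan; a minimal sketch of triggering the same policy from the command line, assuming a Deployment named "nginx" (recent kubectl spells this --cascade=orphan; older releases used --cascade=false):

  # Delete the Deployment but leave its ReplicaSet (and pods) behind
  kubectl delete deployment nginx --cascade=orphan
  # The ReplicaSet survives, with its ownerReference to the Deployment removed
  kubectl get replicasets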
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":74,"skipped":1173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:12.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-97ff6d6d-1955-4a78-940d-8dad004b8793 STEP: Creating a pod to test consume secrets Mar 25 23:54:12.776: INFO: Waiting up to 5m0s for pod "pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469" in namespace "secrets-220" to be "Succeeded or Failed" Mar 25 23:54:12.784: INFO: Pod "pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162254ms Mar 25 23:54:14.791: INFO: Pod "pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014609629s Mar 25 23:54:16.827: INFO: Pod "pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050889301s STEP: Saw pod success Mar 25 23:54:16.827: INFO: Pod "pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469" satisfied condition "Succeeded or Failed" Mar 25 23:54:16.895: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469 container secret-volume-test: STEP: delete the pod Mar 25 23:54:16.911: INFO: Waiting for pod pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469 to disappear Mar 25 23:54:16.916: INFO: Pod pod-secrets-7ccca4eb-57fc-4803-857c-ffda16792469 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:16.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-220" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:16.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-f33619f6-340b-4746-a9c3-60e391a36553 STEP: Creating a pod to test consume secrets Mar 25 23:54:17.015: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27" in namespace "projected-8762" to be "Succeeded or Failed" Mar 25 23:54:17.018: INFO: Pod "pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.928313ms Mar 25 23:54:19.023: INFO: Pod "pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007466309s Mar 25 23:54:21.027: INFO: Pod "pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011734332s STEP: Saw pod success Mar 25 23:54:21.027: INFO: Pod "pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27" satisfied condition "Succeeded or Failed" Mar 25 23:54:21.031: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27 container projected-secret-volume-test: STEP: delete the pod Mar 25 23:54:21.049: INFO: Waiting for pod pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27 to disappear Mar 25 23:54:21.054: INFO: Pod pod-projected-secrets-8ccecf19-eb2c-433e-a980-b3427ae93a27 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:21.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8762" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1256,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:21.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-20a65987-c5c0-4bd5-ab16-8a36860288ba STEP: Creating a pod to test consume secrets Mar 25 23:54:21.139: INFO: Waiting up to 5m0s for pod "pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616" in namespace "secrets-840" to be "Succeeded or Failed" Mar 25 23:54:21.152: INFO: Pod "pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616": Phase="Pending", Reason="", readiness=false. Elapsed: 12.628815ms Mar 25 23:54:23.156: INFO: Pod "pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016615069s Mar 25 23:54:25.160: INFO: Pod "pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020484844s STEP: Saw pod success Mar 25 23:54:25.160: INFO: Pod "pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616" satisfied condition "Succeeded or Failed" Mar 25 23:54:25.163: INFO: Trying to get logs from node latest-worker pod pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616 container secret-volume-test: STEP: delete the pod Mar 25 23:54:25.192: INFO: Waiting for pod pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616 to disappear Mar 25 23:54:25.204: INFO: Pod pod-secrets-a2e52759-8575-4e2e-bd54-c1139fa98616 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:25.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-840" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:25.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 25 23:54:25.246: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 23:54:25.274: INFO: Waiting for terminating namespaces to be deleted... Mar 25 23:54:25.277: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 25 23:54:25.282: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:54:25.282: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 23:54:25.282: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:54:25.282: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 23:54:25.282: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 25 23:54:25.287: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:54:25.287: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 23:54:25.287: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:54:25.287: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0fa38127-a65b-4969-b3d7-311c839ae9a3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-0fa38127-a65b-4969-b3d7-311c839ae9a3 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0fa38127-a65b-4969-b3d7-311c839ae9a3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:54:33.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9489" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.226 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":78,"skipped":1310,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:54:33.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 25 23:54:33.514: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805045 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:54:33.515: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805045 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 25 23:54:43.523: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805093 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:54:43.523: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805093 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 25 23:54:53.530: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805123 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:54:53.530: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805123 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 25 23:55:03.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805150 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:55:03.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-a 3aac5d9c-f6d6-418c-925a-95828470fab2 2805150 0 2020-03-25 23:54:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 25 23:55:13.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-b 413b60b3-9803-4fb4-9749-2fb2e43d46a8 2805180 0 2020-03-25 23:55:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:55:13.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-b 413b60b3-9803-4fb4-9749-2fb2e43d46a8 2805180 0 2020-03-25 23:55:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 25 23:55:23.554: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-b 413b60b3-9803-4fb4-9749-2fb2e43d46a8 2805210 0 2020-03-25 23:55:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 23:55:23.554: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3203 /api/v1/namespaces/watch-3203/configmaps/e2e-watch-test-configmap-b 413b60b3-9803-4fb4-9749-2fb2e43d46a8 2805210 0 2020-03-25 23:55:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:55:33.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "watch-3203" for this suite. • [SLOW TEST:60.126 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":79,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:55:33.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 25 23:55:33.619: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 23:55:33.631: INFO: Waiting for terminating namespaces to be deleted... Mar 25 23:55:33.633: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 25 23:55:33.638: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:55:33.638: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 23:55:33.638: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:55:33.638: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 23:55:33.638: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 25 23:55:33.683: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:55:33.683: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 23:55:33.683: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 25 23:55:33.683: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 25 23:55:33.746: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Mar 25 23:55:33.746: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Mar 25 23:55:33.746: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Mar 25 23:55:33.746: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Mar 25 23:55:33.746: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 25 23:55:33.752: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-44faa673-1731-4308-91f8-c24a785d9065.15ffb14bfbcbb937], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4712/filler-pod-44faa673-1731-4308-91f8-c24a785d9065 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-44faa673-1731-4308-91f8-c24a785d9065.15ffb14c7b27eca7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-44faa673-1731-4308-91f8-c24a785d9065.15ffb14cabeb3225], Reason = [Created], Message = [Created container filler-pod-44faa673-1731-4308-91f8-c24a785d9065] STEP: Considering event: Type = [Normal], Name = [filler-pod-44faa673-1731-4308-91f8-c24a785d9065.15ffb14cbadfa7d7], Reason = [Started], Message = [Started container filler-pod-44faa673-1731-4308-91f8-c24a785d9065] STEP: Considering event: Type = [Normal], Name = [filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552.15ffb14bfb721698], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4712/filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552.15ffb14c45b1fb23], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552.15ffb14c824bcb3b], Reason = [Created], Message = [Created container filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552] STEP: Considering event: Type = [Normal], Name = [filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552.15ffb14c9c342ad1], Reason = [Started], Message = [Started container filler-pod-77557eb2-647f-4576-a654-7f5d48ddd552] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ffb14ceb459d84], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:55:38.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4712" for this suite.
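The FailedScheduling event above is driven purely by CPU requests: the filler pods book nearly all allocatable CPU, so one more request cannot fit on any schedulable node; a minimal sketch of a pod carrying such a request (the value is illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: additional-pod
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
      resources:
        requests:
          cpu: "500m"   # the scheduler books this against node allocatable CPU
        limits:
          cpu: "500m"
  EOF
  # If no node has 500m free, the pod stays Pending with a FailedScheduling event
  kubectl describe pod additional-pod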
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.337 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":80,"skipped":1362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:55:38.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:55:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2005" for this suite. 
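The pod in this test crash-loops by design, and the point under test is that deletion must still succeed; a minimal sketch of such a pod (names are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false
  spec:
    restartPolicy: Always
    containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]   # exits non-zero immediately, so the container restarts forever
  EOF
  # Even while crash-looping, the pod can be deleted cleanly
  kubectl delete pod bin-false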
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1395,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:55:39.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-ea532189-e27f-42d5-a58a-36caea294d86 STEP: Creating secret with name s-test-opt-upd-9346176c-0d39-4344-894e-bdaeef3df0f5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ea532189-e27f-42d5-a58a-36caea294d86 STEP: Updating secret s-test-opt-upd-9346176c-0d39-4344-894e-bdaeef3df0f5 STEP: Creating secret with name s-test-opt-create-0d5bd0a5-9380-4487-8492-e74255c90d54 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:56:49.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6128" for this suite. • [SLOW TEST:70.530 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1397,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:56:49.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:56:49.644: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:56:56.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-4042" for this suite. • [SLOW TEST:6.826 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":83,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:56:56.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-jj5r STEP: Creating a pod to test atomic-volume-subpath Mar 25 23:56:56.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jj5r" in namespace "subpath-8498" to be "Succeeded or Failed" Mar 25 23:56:56.535: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.923725ms Mar 25 23:56:58.539: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007775847s Mar 25 23:57:00.543: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 4.011770848s Mar 25 23:57:02.548: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 6.017048508s Mar 25 23:57:04.552: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 8.020679453s Mar 25 23:57:06.556: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 10.024964911s Mar 25 23:57:08.560: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 12.028861847s Mar 25 23:57:10.564: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 14.032967176s Mar 25 23:57:12.568: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 16.036939s Mar 25 23:57:14.572: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 18.040768541s Mar 25 23:57:16.576: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.044888222s Mar 25 23:57:18.581: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Running", Reason="", readiness=true. Elapsed: 22.049518475s Mar 25 23:57:20.585: INFO: Pod "pod-subpath-test-downwardapi-jj5r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054111582s STEP: Saw pod success Mar 25 23:57:20.585: INFO: Pod "pod-subpath-test-downwardapi-jj5r" satisfied condition "Succeeded or Failed" Mar 25 23:57:20.588: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-jj5r container test-container-subpath-downwardapi-jj5r: STEP: delete the pod Mar 25 23:57:20.660: INFO: Waiting for pod pod-subpath-test-downwardapi-jj5r to disappear Mar 25 23:57:20.675: INFO: Pod pod-subpath-test-downwardapi-jj5r no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jj5r Mar 25 23:57:20.675: INFO: Deleting pod "pod-subpath-test-downwardapi-jj5r" in namespace "subpath-8498" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:20.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8498" for this suite. • [SLOW TEST:24.257 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":84,"skipped":1437,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:20.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-92003231-7e69-45f7-905f-4d98a5eccef4 STEP: Creating a pod to test consume configMaps Mar 25 23:57:20.755: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06" in namespace "projected-9326" to be "Succeeded or Failed" Mar 25 23:57:20.784: INFO: Pod "pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06": Phase="Pending", Reason="", readiness=false. Elapsed: 28.863433ms Mar 25 23:57:22.788: INFO: Pod "pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033318752s Mar 25 23:57:24.793: INFO: Pod "pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03736116s STEP: Saw pod success Mar 25 23:57:24.793: INFO: Pod "pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06" satisfied condition "Succeeded or Failed" Mar 25 23:57:24.796: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06 container projected-configmap-volume-test: STEP: delete the pod Mar 25 23:57:24.812: INFO: Waiting for pod pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06 to disappear Mar 25 23:57:24.817: INFO: Pod pod-projected-configmaps-a377906b-9b5b-4e8f-9392-d15c4acdde06 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:24.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9326" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1443,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:24.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:57:24.872: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:25.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8525" for this suite. 
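Editor's note: the get/update/patch behavior exercised by the status sub-resource spec above is enabled by the CRD's subresources stanza. A minimal sketch of such a definition follows; the group, kind, and names here are hypothetical, not the randomized ones the e2e framework generates.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: noxus.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}   # exposes GET/PUT/PATCH on .../noxus/<name>/status

With status enabled, writes to the main resource endpoint ignore .status and writes to the /status endpoint ignore everything else, which is exactly the round-trip the test performs.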
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":86,"skipped":1443,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:25.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-63c215dc-3885-4f12-bb43-2a76f7e52077 STEP: Creating a pod to test consume configMaps Mar 25 23:57:25.574: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163" in namespace "projected-3639" to be "Succeeded or Failed" Mar 25 23:57:25.578: INFO: Pod "pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332753ms Mar 25 23:57:27.586: INFO: Pod "pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012372743s Mar 25 23:57:29.611: INFO: Pod "pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036659469s STEP: Saw pod success Mar 25 23:57:29.611: INFO: Pod "pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163" satisfied condition "Succeeded or Failed" Mar 25 23:57:29.614: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163 container projected-configmap-volume-test: STEP: delete the pod Mar 25 23:57:29.646: INFO: Waiting for pod pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163 to disappear Mar 25 23:57:29.656: INFO: Pod pod-projected-configmaps-7c3aebd4-3c31-4664-950f-f67af3ce5163 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:29.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3639" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1460,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:29.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-4e46f226-8aba-4943-ba84-48f5fcf94821 STEP: Creating a pod to test consume secrets Mar 25 23:57:29.772: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d" in namespace "projected-2328" to be "Succeeded or Failed" Mar 25 23:57:29.814: INFO: Pod "pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.22715ms Mar 25 23:57:31.817: INFO: Pod "pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045663569s Mar 25 23:57:33.821: INFO: Pod "pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049432252s STEP: Saw pod success Mar 25 23:57:33.821: INFO: Pod "pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d" satisfied condition "Succeeded or Failed" Mar 25 23:57:33.824: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d container projected-secret-volume-test: STEP: delete the pod Mar 25 23:57:33.850: INFO: Waiting for pod pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d to disappear Mar 25 23:57:33.867: INFO: Pod pod-projected-secrets-6f8913ec-1166-4827-92d7-f5e774ec8d2d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:33.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2328" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1474,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:33.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-6320 STEP: creating replication controller nodeport-test in namespace services-6320 I0325 23:57:34.052241 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6320, replica count: 2 I0325 23:57:37.102757 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 23:57:40.102996 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 23:57:40.103: INFO: Creating new exec pod Mar 25 23:57:45.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6320 execpodml4xp -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 25 23:57:45.369: INFO: stderr: "I0325 23:57:45.294718 726 log.go:172] (0xc000913810) (0xc000835f40) Create stream\nI0325 23:57:45.294778 726 log.go:172] (0xc000913810) (0xc000835f40) Stream added, broadcasting: 1\nI0325 23:57:45.305416 726 log.go:172] (0xc000913810) Reply frame received for 1\nI0325 23:57:45.305464 726 log.go:172] (0xc000913810) (0xc000665860) Create stream\nI0325 23:57:45.305477 726 log.go:172] (0xc000913810) (0xc000665860) Stream added, broadcasting: 3\nI0325 23:57:45.307449 726 log.go:172] (0xc000913810) Reply frame received for 3\nI0325 23:57:45.307498 726 log.go:172] (0xc000913810) (0xc000448c80) Create stream\nI0325 23:57:45.307515 726 log.go:172] (0xc000913810) (0xc000448c80) Stream added, broadcasting: 5\nI0325 23:57:45.308510 726 log.go:172] (0xc000913810) Reply frame received for 5\nI0325 23:57:45.362790 726 log.go:172] (0xc000913810) Data frame received for 5\nI0325 23:57:45.362890 726 log.go:172] (0xc000448c80) (5) Data frame handling\nI0325 23:57:45.362955 726 log.go:172] (0xc000448c80) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0325 23:57:45.363043 726 log.go:172] (0xc000913810) Data frame received for 5\nI0325 23:57:45.363068 726 log.go:172] (0xc000448c80) (5) Data frame handling\nI0325 23:57:45.363101 726 log.go:172] (0xc000448c80) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0325 23:57:45.363659 726 log.go:172] (0xc000913810) Data frame received for 5\nI0325 23:57:45.363681 726 log.go:172] (0xc000448c80) (5) Data frame handling\nI0325 23:57:45.363698 726 
log.go:172] (0xc000913810) Data frame received for 3\nI0325 23:57:45.363746 726 log.go:172] (0xc000665860) (3) Data frame handling\nI0325 23:57:45.365612 726 log.go:172] (0xc000913810) Data frame received for 1\nI0325 23:57:45.365639 726 log.go:172] (0xc000835f40) (1) Data frame handling\nI0325 23:57:45.365654 726 log.go:172] (0xc000835f40) (1) Data frame sent\nI0325 23:57:45.365672 726 log.go:172] (0xc000913810) (0xc000835f40) Stream removed, broadcasting: 1\nI0325 23:57:45.365720 726 log.go:172] (0xc000913810) Go away received\nI0325 23:57:45.366101 726 log.go:172] (0xc000913810) (0xc000835f40) Stream removed, broadcasting: 1\nI0325 23:57:45.366119 726 log.go:172] (0xc000913810) (0xc000665860) Stream removed, broadcasting: 3\nI0325 23:57:45.366130 726 log.go:172] (0xc000913810) (0xc000448c80) Stream removed, broadcasting: 5\n" Mar 25 23:57:45.369: INFO: stdout: "" Mar 25 23:57:45.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6320 execpodml4xp -- /bin/sh -x -c nc -zv -t -w 2 10.96.16.98 80' Mar 25 23:57:45.576: INFO: stderr: "I0325 23:57:45.502364 746 log.go:172] (0xc00003a580) (0xc000741400) Create stream\nI0325 23:57:45.502473 746 log.go:172] (0xc00003a580) (0xc000741400) Stream added, broadcasting: 1\nI0325 23:57:45.506280 746 log.go:172] (0xc00003a580) Reply frame received for 1\nI0325 23:57:45.506316 746 log.go:172] (0xc00003a580) (0xc0007414a0) Create stream\nI0325 23:57:45.506325 746 log.go:172] (0xc00003a580) (0xc0007414a0) Stream added, broadcasting: 3\nI0325 23:57:45.507517 746 log.go:172] (0xc00003a580) Reply frame received for 3\nI0325 23:57:45.507550 746 log.go:172] (0xc00003a580) (0xc000a40000) Create stream\nI0325 23:57:45.507566 746 log.go:172] (0xc00003a580) (0xc000a40000) Stream added, broadcasting: 5\nI0325 23:57:45.508594 746 log.go:172] (0xc00003a580) Reply frame received for 5\nI0325 23:57:45.569410 746 log.go:172] (0xc00003a580) Data frame received for 3\nI0325 23:57:45.569457 746 log.go:172] (0xc0007414a0) (3) Data frame handling\nI0325 23:57:45.569513 746 log.go:172] (0xc00003a580) Data frame received for 5\nI0325 23:57:45.569563 746 log.go:172] (0xc000a40000) (5) Data frame handling\nI0325 23:57:45.569597 746 log.go:172] (0xc000a40000) (5) Data frame sent\nI0325 23:57:45.569614 746 log.go:172] (0xc00003a580) Data frame received for 5\nI0325 23:57:45.569625 746 log.go:172] (0xc000a40000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.16.98 80\nConnection to 10.96.16.98 80 port [tcp/http] succeeded!\nI0325 23:57:45.571341 746 log.go:172] (0xc00003a580) Data frame received for 1\nI0325 23:57:45.571362 746 log.go:172] (0xc000741400) (1) Data frame handling\nI0325 23:57:45.571375 746 log.go:172] (0xc000741400) (1) Data frame sent\nI0325 23:57:45.571390 746 log.go:172] (0xc00003a580) (0xc000741400) Stream removed, broadcasting: 1\nI0325 23:57:45.571476 746 log.go:172] (0xc00003a580) Go away received\nI0325 23:57:45.571790 746 log.go:172] (0xc00003a580) (0xc000741400) Stream removed, broadcasting: 1\nI0325 23:57:45.571812 746 log.go:172] (0xc00003a580) (0xc0007414a0) Stream removed, broadcasting: 3\nI0325 23:57:45.571825 746 log.go:172] (0xc00003a580) (0xc000a40000) Stream removed, broadcasting: 5\n" Mar 25 23:57:45.576: INFO: stdout: "" Mar 25 23:57:45.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6320 execpodml4xp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32725' Mar 25 23:57:45.782: 
INFO: stderr: "I0325 23:57:45.715631 769 log.go:172] (0xc0009bca50) (0xc0007e34a0) Create stream\nI0325 23:57:45.715707 769 log.go:172] (0xc0009bca50) (0xc0007e34a0) Stream added, broadcasting: 1\nI0325 23:57:45.719029 769 log.go:172] (0xc0009bca50) Reply frame received for 1\nI0325 23:57:45.719081 769 log.go:172] (0xc0009bca50) (0xc000540000) Create stream\nI0325 23:57:45.719187 769 log.go:172] (0xc0009bca50) (0xc000540000) Stream added, broadcasting: 3\nI0325 23:57:45.720232 769 log.go:172] (0xc0009bca50) Reply frame received for 3\nI0325 23:57:45.720276 769 log.go:172] (0xc0009bca50) (0xc0005400a0) Create stream\nI0325 23:57:45.720292 769 log.go:172] (0xc0009bca50) (0xc0005400a0) Stream added, broadcasting: 5\nI0325 23:57:45.721032 769 log.go:172] (0xc0009bca50) Reply frame received for 5\nI0325 23:57:45.777264 769 log.go:172] (0xc0009bca50) Data frame received for 5\nI0325 23:57:45.777312 769 log.go:172] (0xc0005400a0) (5) Data frame handling\nI0325 23:57:45.777327 769 log.go:172] (0xc0005400a0) (5) Data frame sent\nI0325 23:57:45.777337 769 log.go:172] (0xc0009bca50) Data frame received for 5\nI0325 23:57:45.777345 769 log.go:172] (0xc0005400a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32725\nConnection to 172.17.0.13 32725 port [tcp/32725] succeeded!\nI0325 23:57:45.777368 769 log.go:172] (0xc0009bca50) Data frame received for 3\nI0325 23:57:45.777381 769 log.go:172] (0xc000540000) (3) Data frame handling\nI0325 23:57:45.778715 769 log.go:172] (0xc0009bca50) Data frame received for 1\nI0325 23:57:45.778748 769 log.go:172] (0xc0007e34a0) (1) Data frame handling\nI0325 23:57:45.778762 769 log.go:172] (0xc0007e34a0) (1) Data frame sent\nI0325 23:57:45.778786 769 log.go:172] (0xc0009bca50) (0xc0007e34a0) Stream removed, broadcasting: 1\nI0325 23:57:45.778821 769 log.go:172] (0xc0009bca50) Go away received\nI0325 23:57:45.779297 769 log.go:172] (0xc0009bca50) (0xc0007e34a0) Stream removed, broadcasting: 1\nI0325 23:57:45.779324 769 log.go:172] (0xc0009bca50) (0xc000540000) Stream removed, broadcasting: 3\nI0325 23:57:45.779337 769 log.go:172] (0xc0009bca50) (0xc0005400a0) Stream removed, broadcasting: 5\n" Mar 25 23:57:45.782: INFO: stdout: "" Mar 25 23:57:45.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6320 execpodml4xp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32725' Mar 25 23:57:45.976: INFO: stderr: "I0325 23:57:45.901361 792 log.go:172] (0xc000ae2f20) (0xc0009b85a0) Create stream\nI0325 23:57:45.901420 792 log.go:172] (0xc000ae2f20) (0xc0009b85a0) Stream added, broadcasting: 1\nI0325 23:57:45.906480 792 log.go:172] (0xc000ae2f20) Reply frame received for 1\nI0325 23:57:45.906509 792 log.go:172] (0xc000ae2f20) (0xc00061b7c0) Create stream\nI0325 23:57:45.906516 792 log.go:172] (0xc000ae2f20) (0xc00061b7c0) Stream added, broadcasting: 3\nI0325 23:57:45.907334 792 log.go:172] (0xc000ae2f20) Reply frame received for 3\nI0325 23:57:45.907389 792 log.go:172] (0xc000ae2f20) (0xc0003e4be0) Create stream\nI0325 23:57:45.907417 792 log.go:172] (0xc000ae2f20) (0xc0003e4be0) Stream added, broadcasting: 5\nI0325 23:57:45.908184 792 log.go:172] (0xc000ae2f20) Reply frame received for 5\nI0325 23:57:45.968561 792 log.go:172] (0xc000ae2f20) Data frame received for 3\nI0325 23:57:45.968603 792 log.go:172] (0xc00061b7c0) (3) Data frame handling\nI0325 23:57:45.968840 792 log.go:172] (0xc000ae2f20) Data frame received for 5\nI0325 23:57:45.968914 792 log.go:172] (0xc0003e4be0) (5) Data frame 
handling\nI0325 23:57:45.968971 792 log.go:172] (0xc0003e4be0) (5) Data frame sent\nI0325 23:57:45.969015 792 log.go:172] (0xc000ae2f20) Data frame received for 5\nI0325 23:57:45.969035 792 log.go:172] (0xc0003e4be0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32725\nConnection to 172.17.0.12 32725 port [tcp/32725] succeeded!\nI0325 23:57:45.971396 792 log.go:172] (0xc000ae2f20) Data frame received for 1\nI0325 23:57:45.971508 792 log.go:172] (0xc0009b85a0) (1) Data frame handling\nI0325 23:57:45.971556 792 log.go:172] (0xc0009b85a0) (1) Data frame sent\nI0325 23:57:45.971581 792 log.go:172] (0xc000ae2f20) (0xc0009b85a0) Stream removed, broadcasting: 1\nI0325 23:57:45.971604 792 log.go:172] (0xc000ae2f20) Go away received\nI0325 23:57:45.972092 792 log.go:172] (0xc000ae2f20) (0xc0009b85a0) Stream removed, broadcasting: 1\nI0325 23:57:45.972121 792 log.go:172] (0xc000ae2f20) (0xc00061b7c0) Stream removed, broadcasting: 3\nI0325 23:57:45.972133 792 log.go:172] (0xc000ae2f20) (0xc0003e4be0) Stream removed, broadcasting: 5\n" Mar 25 23:57:45.976: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:45.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6320" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.110 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":89,"skipped":1477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:45.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:57:46.100: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a0b4df8b-5cec-400d-9568-e5dfca5c1e0d" in namespace "security-context-test-7770" to be "Succeeded or Failed" Mar 25 23:57:46.111: INFO: Pod "busybox-readonly-false-a0b4df8b-5cec-400d-9568-e5dfca5c1e0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.060176ms Mar 25 23:57:48.116: INFO: Pod "busybox-readonly-false-a0b4df8b-5cec-400d-9568-e5dfca5c1e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015132597s Mar 25 23:57:50.120: INFO: Pod "busybox-readonly-false-a0b4df8b-5cec-400d-9568-e5dfca5c1e0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019382519s Mar 25 23:57:50.120: INFO: Pod "busybox-readonly-false-a0b4df8b-5cec-400d-9568-e5dfca5c1e0d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:50.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7770" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1517,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:50.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-d5c2c696-676f-4da9-ae51-144a5d46d760 STEP: Creating a pod to test consume configMaps Mar 25 23:57:50.209: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e" in namespace "projected-3276" to be "Succeeded or Failed" Mar 25 23:57:50.214: INFO: Pod "pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.313101ms Mar 25 23:57:52.218: INFO: Pod "pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009474402s Mar 25 23:57:54.223: INFO: Pod "pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01416231s STEP: Saw pod success Mar 25 23:57:54.223: INFO: Pod "pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e" satisfied condition "Succeeded or Failed" Mar 25 23:57:54.226: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e container projected-configmap-volume-test: STEP: delete the pod Mar 25 23:57:54.255: INFO: Waiting for pod pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e to disappear Mar 25 23:57:54.282: INFO: Pod pod-projected-configmaps-a012cf75-28e3-4d50-9fcc-e334b0a92b5e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:54.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3276" for this suite. 
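Editor's note: the "multiple volumes in the same pod" spec above mounts the same ConfigMap through two separate volumes. A minimal sketch of that shape, with a hypothetical ConfigMap name:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-multi-demo   # hypothetical
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/vol-1/data-1 /etc/vol-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/vol-1
    - name: vol-2
      mountPath: /etc/vol-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: shared-config   # hypothetical; the same ConfigMap backs both volumes
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: shared-config

Both mounts materialize the same keys, so the container should see identical content at both paths, which is what the spec asserts.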
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1533,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:54.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 25 23:57:54.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175" in namespace "downward-api-3312" to be "Succeeded or Failed" Mar 25 23:57:54.364: INFO: Pod "downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450458ms Mar 25 23:57:56.368: INFO: Pod "downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006833497s Mar 25 23:57:58.377: INFO: Pod "downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016113533s STEP: Saw pod success Mar 25 23:57:58.377: INFO: Pod "downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175" satisfied condition "Succeeded or Failed" Mar 25 23:57:58.380: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175 container client-container: STEP: delete the pod Mar 25 23:57:58.428: INFO: Waiting for pod downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175 to disappear Mar 25 23:57:58.443: INFO: Pod downwardapi-volume-da5d07ed-6b03-4c79-a0e6-67e45b945175 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:57:58.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3312" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1538,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:57:58.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b0cd7ee7-fed8-41c5-80b7-4f9dc1507257 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b0cd7ee7-fed8-41c5-80b7-4f9dc1507257 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:08.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1497" for this suite. • [SLOW TEST:70.443 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1548,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:08.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 25 23:59:08.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120" in namespace "projected-8027" to be "Succeeded or Failed" Mar 25 23:59:08.982: INFO: Pod "downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549969ms Mar 25 23:59:10.986: INFO: Pod "downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012787788s Mar 25 23:59:12.991: INFO: Pod "downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017260171s STEP: Saw pod success Mar 25 23:59:12.991: INFO: Pod "downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120" satisfied condition "Succeeded or Failed" Mar 25 23:59:12.994: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120 container client-container: STEP: delete the pod Mar 25 23:59:13.012: INFO: Waiting for pod downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120 to disappear Mar 25 23:59:13.031: INFO: Pod downwardapi-volume-89dc3531-9861-409c-b354-bb3631721120 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:13.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8027" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1552,"failed":0} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:13.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:59:13.115: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e9f98b8e-490c-4131-89f5-3507a60a1e26" in namespace "security-context-test-8827" to be "Succeeded or Failed" Mar 25 23:59:13.163: INFO: Pod "alpine-nnp-false-e9f98b8e-490c-4131-89f5-3507a60a1e26": Phase="Pending", Reason="", readiness=false. Elapsed: 48.196442ms Mar 25 23:59:15.167: INFO: Pod "alpine-nnp-false-e9f98b8e-490c-4131-89f5-3507a60a1e26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052188898s Mar 25 23:59:17.171: INFO: Pod "alpine-nnp-false-e9f98b8e-490c-4131-89f5-3507a60a1e26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056083233s Mar 25 23:59:17.171: INFO: Pod "alpine-nnp-false-e9f98b8e-490c-4131-89f5-3507a60a1e26" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:17.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8827" for this suite. 
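Editor's note: the allowPrivilegeEscalation=false spec above verifies that the container process runs with the kernel's no_new_privs flag set, so setuid binaries cannot raise its effective privileges. A minimal sketch (hypothetical pod name; on reasonably recent kernels the flag is visible in /proc/self/status):

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false   # sets no_new_privs on the container process

With the flag set the output shows NoNewPrivs: 1; the conformance test makes the same assertion by attempting an escalation and expecting it to fail.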
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:17.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:17.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6306" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":96,"skipped":1581,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:17.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 23:59:17.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 23:59:19.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777557, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777557, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777557, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777557, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 23:59:22.877: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:22.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-632" for this suite. STEP: Destroying namespace "webhook-632-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.675 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":97,"skipped":1583,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:23.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:59:23.125: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 25 23:59:28.131: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 23:59:28.131: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 25 23:59:30.134: INFO: Creating deployment "test-rollover-deployment" Mar 25 23:59:30.160: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 25 23:59:32.175: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 25 23:59:32.180: INFO: Ensure that both replica sets have 1 created replica Mar 25 23:59:32.185: INFO: Rollover old replica sets for 
deployment "test-rollover-deployment" with new image update Mar 25 23:59:32.190: INFO: Updating deployment test-rollover-deployment Mar 25 23:59:32.190: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 25 23:59:34.230: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 25 23:59:34.236: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 25 23:59:34.241: INFO: all replica sets need to contain the pod-template-hash label Mar 25 23:59:34.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777572, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:59:36.263: INFO: all replica sets need to contain the pod-template-hash label Mar 25 23:59:36.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777575, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:59:38.305: INFO: all replica sets need to contain the pod-template-hash label Mar 25 23:59:38.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777575, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 25 23:59:40.249: INFO: all replica sets need to contain the pod-template-hash label Mar 25 23:59:40.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777575, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:59:42.250: INFO: all replica sets need to contain the pod-template-hash label Mar 25 23:59:42.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777575, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:59:44.248: INFO: all replica sets need to contain the pod-template-hash label Mar 25 23:59:44.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777575, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 23:59:46.250: INFO: Mar 25 23:59:46.250: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 25 23:59:46.256: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9677 
/apis/apps/v1/namespaces/deployment-9677/deployments/test-rollover-deployment c7463ab3-e44b-4cd1-9287-8e78becbe271 2806647 2 2020-03-25 23:59:30 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003821ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-25 23:59:30 +0000 UTC,LastTransitionTime:2020-03-25 23:59:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-25 23:59:45 +0000 UTC,LastTransitionTime:2020-03-25 23:59:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 25 23:59:46.259: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-9677 /apis/apps/v1/namespaces/deployment-9677/replicasets/test-rollover-deployment-78df7bc796 a5db78e9-1166-4385-9e90-3e534f5dc7e1 2806636 2 2020-03-25 23:59:32 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c7463ab3-e44b-4cd1-9287-8e78becbe271 0xc002eb8c87 0xc002eb8c88}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002eb8cf8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:59:46.259: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 25 23:59:46.259: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9677 /apis/apps/v1/namespaces/deployment-9677/replicasets/test-rollover-controller 78283384-b2c6-43fe-a923-5a37b9648e57 2806646 2 2020-03-25 23:59:23 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c7463ab3-e44b-4cd1-9287-8e78becbe271 0xc002eb8bb7 0xc002eb8bb8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002eb8c18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:59:46.259: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9677 /apis/apps/v1/namespaces/deployment-9677/replicasets/test-rollover-deployment-f6c94f66c b6a4404b-18cb-478d-a006-aac285736a3a 2806586 2 2020-03-25 23:59:30 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c7463ab3-e44b-4cd1-9287-8e78becbe271 0xc002eb8d60 0xc002eb8d61}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002eb8dd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 23:59:46.262: INFO: Pod "test-rollover-deployment-78df7bc796-ntlwm" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-ntlwm test-rollover-deployment-78df7bc796- deployment-9677 /api/v1/namespaces/deployment-9677/pods/test-rollover-deployment-78df7bc796-ntlwm d8177528-e216-439f-9f98-bd196377e887 2806602 0 2020-03-25 23:59:32 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 a5db78e9-1166-4385-9e90-3e534f5dc7e1 0xc0029cc327 0xc0029cc328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zdbxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zdbxq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zdbxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.246,StartTime:2020-03-25 23:59:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-25 23:59:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://3bec9ce1f5300713d4a8ee0dd9bbf26cfbd07baca90d3538fb77c55156e934dc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:46.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9677" for this suite. • [SLOW TEST:23.231 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":98,"skipped":1585,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:46.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:59:46.374: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 6.790617ms) Mar 25 23:59:46.378: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.902076ms) Mar 25 23:59:46.383: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.254384ms) Mar 25 23:59:46.386: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.505772ms) Mar 25 23:59:46.389: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.924672ms) Mar 25 23:59:46.392: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.064662ms) Mar 25 23:59:46.395: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.907176ms) Mar 25 23:59:46.401: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.302973ms) Mar 25 23:59:46.404: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.115618ms) Mar 25 23:59:46.408: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.055228ms) Mar 25 23:59:46.413: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.658233ms) Mar 25 23:59:46.419: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 6.026207ms) Mar 25 23:59:46.422: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.7335ms) Mar 25 23:59:46.425: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.724344ms) Mar 25 23:59:46.427: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.418719ms) Mar 25 23:59:46.430: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.599847ms) Mar 25 23:59:46.433: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.47943ms) Mar 25 23:59:46.436: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.93258ms) Mar 25 23:59:46.438: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.804528ms) Mar 25 23:59:46.441: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.026404ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:46.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5398" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":99,"skipped":1588,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:46.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-9928/configmap-test-6f0f230e-fb58-4bea-b30c-d93263f7caaa STEP: Creating a pod to test consume configMaps Mar 25 23:59:46.541: INFO: Waiting up to 5m0s for pod "pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd" in namespace "configmap-9928" to be "Succeeded or Failed" Mar 25 23:59:46.544: INFO: Pod "pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.662148ms Mar 25 23:59:48.548: INFO: Pod "pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007002333s Mar 25 23:59:50.551: INFO: Pod "pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010548305s STEP: Saw pod success Mar 25 23:59:50.551: INFO: Pod "pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd" satisfied condition "Succeeded or Failed" Mar 25 23:59:50.555: INFO: Trying to get logs from node latest-worker pod pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd container env-test: STEP: delete the pod Mar 25 23:59:50.623: INFO: Waiting for pod pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd to disappear Mar 25 23:59:50.655: INFO: Pod pod-configmaps-19289787-65a1-4790-b567-1fda328d11fd no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:50.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9928" for this suite. 
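The case just above boils down to projecting one ConfigMap key into a container's environment and asserting on the echoed value. A minimal sketch of such a pod, in Go with the same k8s.io/api types these logs dump; the object names (pod-configmaps-env, env-test-cm, DATA_1) and the busybox image are illustrative stand-ins, not the suite's actual fixtures:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose container imports one ConfigMap key as an env var and
	// then prints its environment, so a test can grep for the value.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "env-test-cm"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // the manifest as the API server would accept it
}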
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1593,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:50.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 25 23:59:50.712: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 25 23:59:51.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3870" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":101,"skipped":1610,"failed":0} S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 25 23:59:51.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 25 23:59:56.243: INFO: &Pod{ObjectMeta:{send-events-6ef9c735-f441-4338-ac09-78594f035934 events-1601 /api/v1/namespaces/events-1601/pods/send-events-6ef9c735-f441-4338-ac09-78594f035934 56b396c9-98b9-49f2-983d-45433587ff61 2806771 0 2020-03-25 23:59:52 +0000 UTC map[name:foo time:208317536] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8gxzs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8gxzs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8gxzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-25 23:59:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.102,StartTime:2020-03-25 23:59:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-25 23:59:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://9516947ee5636d0e8b8eccbe726d0e693c2066e8f48dc5d2269411dbe0c6ea95,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 25 23:59:58.284: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 26 00:00:00.288: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:00:00.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1601" for this suite. • [SLOW TEST:8.413 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":102,"skipped":1611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:00:00.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 26 00:00:00.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 26 00:00:00.414: INFO: Waiting for terminating namespaces to be deleted... 
Mar 26 00:00:00.416: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 26 00:00:00.421: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 26 00:00:00.421: INFO: Container kube-proxy ready: true, restart count 0 Mar 26 00:00:00.421: INFO: send-events-6ef9c735-f441-4338-ac09-78594f035934 from events-1601 started at 2020-03-25 23:59:52 +0000 UTC (1 container status recorded) Mar 26 00:00:00.421: INFO: Container p ready: true, restart count 0 Mar 26 00:00:00.421: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 26 00:00:00.421: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:00:00.421: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 26 00:00:00.427: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 26 00:00:00.427: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:00:00.427: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 26 00:00:00.427: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node that can launch it. STEP: Explicitly deleting the pod here to free the resources it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c99044c4-1661-456f-84d3-d13afbc6153e 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-c99044c4-1661-456f-84d3-d13afbc6153e off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c99044c4-1661-456f-84d3-d13afbc6153e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:00:16.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2134" for this suite. 
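The scheduling rule this case exercises: two host ports conflict only when hostPort, hostIP, and protocol all collide, so pod2 (different hostIP) and pod3 (different protocol) can still land on pod1's node. A small sketch of the three ContainerPort shapes involved; the container port 8080 is an arbitrary assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Same HostPort 54321 on all three, but differing HostIP or Protocol:
	// none of these conflict, so all three pods fit on one node.
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3
	}
	for i, p := range ports {
		fmt.Printf("pod%d: hostIP=%s hostPort=%d proto=%s\n", i+1, p.HostIP, p.HostPort, p.Protocol)
	}
}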
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.269 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":103,"skipped":1654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:00:16.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 26 00:00:16.760: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9265 /api/v1/namespaces/watch-9265/configmaps/e2e-watch-test-label-changed cde7df17-6a21-492f-8fce-89dc3d5cc67b 2806904 0 2020-03-26 00:00:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 26 00:00:16.761: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9265 /api/v1/namespaces/watch-9265/configmaps/e2e-watch-test-label-changed cde7df17-6a21-492f-8fce-89dc3d5cc67b 2806905 0 2020-03-26 00:00:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 26 00:00:16.761: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9265 /api/v1/namespaces/watch-9265/configmaps/e2e-watch-test-label-changed cde7df17-6a21-492f-8fce-89dc3d5cc67b 2806906 0 2020-03-26 00:00:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 26 00:00:26.784: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9265 /api/v1/namespaces/watch-9265/configmaps/e2e-watch-test-label-changed cde7df17-6a21-492f-8fce-89dc3d5cc67b 2806962 0 2020-03-26 00:00:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 26 00:00:26.784: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9265 /api/v1/namespaces/watch-9265/configmaps/e2e-watch-test-label-changed cde7df17-6a21-492f-8fce-89dc3d5cc67b 2806963 0 2020-03-26 00:00:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 26 00:00:26.784: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9265 /api/v1/namespaces/watch-9265/configmaps/e2e-watch-test-label-changed cde7df17-6a21-492f-8fce-89dc3d5cc67b 2806964 0 2020-03-26 00:00:16 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:00:26.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9265" for this suite. • [SLOW TEST:10.185 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":104,"skipped":1681,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:00:26.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1394 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1394 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1394 Mar 26 00:00:26.908: INFO: Found 0 stateful pods, 
waiting for 1 Mar 26 00:00:36.912: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 26 00:00:36.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 00:00:37.144: INFO: stderr: "I0326 00:00:37.030862 815 log.go:172] (0xc000a9c000) (0xc000687680) Create stream\nI0326 00:00:37.030907 815 log.go:172] (0xc000a9c000) (0xc000687680) Stream added, broadcasting: 1\nI0326 00:00:37.033904 815 log.go:172] (0xc000a9c000) Reply frame received for 1\nI0326 00:00:37.033946 815 log.go:172] (0xc000a9c000) (0xc000587720) Create stream\nI0326 00:00:37.033956 815 log.go:172] (0xc000a9c000) (0xc000587720) Stream added, broadcasting: 3\nI0326 00:00:37.034932 815 log.go:172] (0xc000a9c000) Reply frame received for 3\nI0326 00:00:37.034975 815 log.go:172] (0xc000a9c000) (0xc000687720) Create stream\nI0326 00:00:37.034987 815 log.go:172] (0xc000a9c000) (0xc000687720) Stream added, broadcasting: 5\nI0326 00:00:37.035907 815 log.go:172] (0xc000a9c000) Reply frame received for 5\nI0326 00:00:37.107395 815 log.go:172] (0xc000a9c000) Data frame received for 5\nI0326 00:00:37.107425 815 log.go:172] (0xc000687720) (5) Data frame handling\nI0326 00:00:37.107451 815 log.go:172] (0xc000687720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 00:00:37.137045 815 log.go:172] (0xc000a9c000) Data frame received for 3\nI0326 00:00:37.137071 815 log.go:172] (0xc000587720) (3) Data frame handling\nI0326 00:00:37.137356 815 log.go:172] (0xc000587720) (3) Data frame sent\nI0326 00:00:37.137378 815 log.go:172] (0xc000a9c000) Data frame received for 3\nI0326 00:00:37.137385 815 log.go:172] (0xc000587720) (3) Data frame handling\nI0326 00:00:37.137468 815 log.go:172] (0xc000a9c000) Data frame received for 5\nI0326 00:00:37.137493 815 log.go:172] (0xc000687720) (5) Data frame handling\nI0326 00:00:37.139566 815 log.go:172] (0xc000a9c000) Data frame received for 1\nI0326 00:00:37.139613 815 log.go:172] (0xc000687680) (1) Data frame handling\nI0326 00:00:37.139722 815 log.go:172] (0xc000687680) (1) Data frame sent\nI0326 00:00:37.139751 815 log.go:172] (0xc000a9c000) (0xc000687680) Stream removed, broadcasting: 1\nI0326 00:00:37.139779 815 log.go:172] (0xc000a9c000) Go away received\nI0326 00:00:37.140156 815 log.go:172] (0xc000a9c000) (0xc000687680) Stream removed, broadcasting: 1\nI0326 00:00:37.140180 815 log.go:172] (0xc000a9c000) (0xc000587720) Stream removed, broadcasting: 3\nI0326 00:00:37.140199 815 log.go:172] (0xc000a9c000) (0xc000687720) Stream removed, broadcasting: 5\n" Mar 26 00:00:37.144: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 00:00:37.144: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 26 00:00:37.151: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 26 00:00:47.155: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 26 00:00:47.155: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 00:00:47.172: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999901s Mar 26 00:00:48.177: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 8.994390525s Mar 26 00:00:49.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989619686s Mar 26 00:00:50.188: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981873749s Mar 26 00:00:51.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978119579s Mar 26 00:00:52.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974136879s Mar 26 00:00:53.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969659114s Mar 26 00:00:54.206: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965274735s Mar 26 00:00:55.210: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.960766165s Mar 26 00:00:56.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.557792ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1394 Mar 26 00:00:57.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 00:00:57.428: INFO: stderr: "I0326 00:00:57.356403 836 log.go:172] (0xc0000e8370) (0xc0009ca000) Create stream\nI0326 00:00:57.356459 836 log.go:172] (0xc0000e8370) (0xc0009ca000) Stream added, broadcasting: 1\nI0326 00:00:57.364351 836 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0326 00:00:57.364399 836 log.go:172] (0xc0000e8370) (0xc0007070e0) Create stream\nI0326 00:00:57.364411 836 log.go:172] (0xc0000e8370) (0xc0007070e0) Stream added, broadcasting: 3\nI0326 00:00:57.365434 836 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0326 00:00:57.365471 836 log.go:172] (0xc0000e8370) (0xc0007072c0) Create stream\nI0326 00:00:57.365478 836 log.go:172] (0xc0000e8370) (0xc0007072c0) Stream added, broadcasting: 5\nI0326 00:00:57.366227 836 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0326 00:00:57.421800 836 log.go:172] (0xc0000e8370) Data frame received for 5\nI0326 00:00:57.421845 836 log.go:172] (0xc0007072c0) (5) Data frame handling\nI0326 00:00:57.421861 836 log.go:172] (0xc0007072c0) (5) Data frame sent\nI0326 00:00:57.421870 836 log.go:172] (0xc0000e8370) Data frame received for 5\nI0326 00:00:57.421878 836 log.go:172] (0xc0007072c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 00:00:57.421922 836 log.go:172] (0xc0000e8370) Data frame received for 3\nI0326 00:00:57.421952 836 log.go:172] (0xc0007070e0) (3) Data frame handling\nI0326 00:00:57.421970 836 log.go:172] (0xc0007070e0) (3) Data frame sent\nI0326 00:00:57.421981 836 log.go:172] (0xc0000e8370) Data frame received for 3\nI0326 00:00:57.421988 836 log.go:172] (0xc0007070e0) (3) Data frame handling\nI0326 00:00:57.423374 836 log.go:172] (0xc0000e8370) Data frame received for 1\nI0326 00:00:57.423407 836 log.go:172] (0xc0009ca000) (1) Data frame handling\nI0326 00:00:57.423427 836 log.go:172] (0xc0009ca000) (1) Data frame sent\nI0326 00:00:57.423454 836 log.go:172] (0xc0000e8370) (0xc0009ca000) Stream removed, broadcasting: 1\nI0326 00:00:57.423484 836 log.go:172] (0xc0000e8370) Go away received\nI0326 00:00:57.423889 836 log.go:172] (0xc0000e8370) (0xc0009ca000) Stream removed, broadcasting: 1\nI0326 00:00:57.423913 836 log.go:172] (0xc0000e8370) (0xc0007070e0) Stream removed, broadcasting: 3\nI0326 00:00:57.423925 836 log.go:172] (0xc0000e8370) (0xc0007072c0) Stream removed, broadcasting: 5\n" Mar 26 
00:00:57.428: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 00:00:57.428: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 26 00:00:57.432: INFO: Found 1 stateful pods, waiting for 3 Mar 26 00:01:07.437: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 26 00:01:07.437: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 26 00:01:07.437: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 26 00:01:07.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 00:01:07.848: INFO: stderr: "I0326 00:01:07.778752 857 log.go:172] (0xc000bb6f20) (0xc0009bc500) Create stream\nI0326 00:01:07.778784 857 log.go:172] (0xc000bb6f20) (0xc0009bc500) Stream added, broadcasting: 1\nI0326 00:01:07.782924 857 log.go:172] (0xc000bb6f20) Reply frame received for 1\nI0326 00:01:07.782961 857 log.go:172] (0xc000bb6f20) (0xc00059b5e0) Create stream\nI0326 00:01:07.782976 857 log.go:172] (0xc000bb6f20) (0xc00059b5e0) Stream added, broadcasting: 3\nI0326 00:01:07.783891 857 log.go:172] (0xc000bb6f20) Reply frame received for 3\nI0326 00:01:07.783955 857 log.go:172] (0xc000bb6f20) (0xc0003d0a00) Create stream\nI0326 00:01:07.783980 857 log.go:172] (0xc000bb6f20) (0xc0003d0a00) Stream added, broadcasting: 5\nI0326 00:01:07.784836 857 log.go:172] (0xc000bb6f20) Reply frame received for 5\nI0326 00:01:07.843204 857 log.go:172] (0xc000bb6f20) Data frame received for 3\nI0326 00:01:07.843227 857 log.go:172] (0xc00059b5e0) (3) Data frame handling\nI0326 00:01:07.843241 857 log.go:172] (0xc00059b5e0) (3) Data frame sent\nI0326 00:01:07.843279 857 log.go:172] (0xc000bb6f20) Data frame received for 5\nI0326 00:01:07.843318 857 log.go:172] (0xc0003d0a00) (5) Data frame handling\nI0326 00:01:07.843337 857 log.go:172] (0xc0003d0a00) (5) Data frame sent\nI0326 00:01:07.843347 857 log.go:172] (0xc000bb6f20) Data frame received for 5\nI0326 00:01:07.843357 857 log.go:172] (0xc0003d0a00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 00:01:07.843410 857 log.go:172] (0xc000bb6f20) Data frame received for 3\nI0326 00:01:07.843429 857 log.go:172] (0xc00059b5e0) (3) Data frame handling\nI0326 00:01:07.844891 857 log.go:172] (0xc000bb6f20) Data frame received for 1\nI0326 00:01:07.844914 857 log.go:172] (0xc0009bc500) (1) Data frame handling\nI0326 00:01:07.844925 857 log.go:172] (0xc0009bc500) (1) Data frame sent\nI0326 00:01:07.844942 857 log.go:172] (0xc000bb6f20) (0xc0009bc500) Stream removed, broadcasting: 1\nI0326 00:01:07.844959 857 log.go:172] (0xc000bb6f20) Go away received\nI0326 00:01:07.845392 857 log.go:172] (0xc000bb6f20) (0xc0009bc500) Stream removed, broadcasting: 1\nI0326 00:01:07.845417 857 log.go:172] (0xc000bb6f20) (0xc00059b5e0) Stream removed, broadcasting: 3\nI0326 00:01:07.845425 857 log.go:172] (0xc000bb6f20) (0xc0003d0a00) Stream removed, broadcasting: 5\n" Mar 26 00:01:07.848: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 00:01:07.848: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 26 00:01:07.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 00:01:08.152: INFO: stderr: "I0326 00:01:08.016481 880 log.go:172] (0xc0003c88f0) (0xc000601400) Create stream\nI0326 00:01:08.016532 880 log.go:172] (0xc0003c88f0) (0xc000601400) Stream added, broadcasting: 1\nI0326 00:01:08.023487 880 log.go:172] (0xc0003c88f0) Reply frame received for 1\nI0326 00:01:08.023553 880 log.go:172] (0xc0003c88f0) (0xc000a92000) Create stream\nI0326 00:01:08.023578 880 log.go:172] (0xc0003c88f0) (0xc000a92000) Stream added, broadcasting: 3\nI0326 00:01:08.027511 880 log.go:172] (0xc0003c88f0) Reply frame received for 3\nI0326 00:01:08.027551 880 log.go:172] (0xc0003c88f0) (0xc0006014a0) Create stream\nI0326 00:01:08.027561 880 log.go:172] (0xc0003c88f0) (0xc0006014a0) Stream added, broadcasting: 5\nI0326 00:01:08.028379 880 log.go:172] (0xc0003c88f0) Reply frame received for 5\nI0326 00:01:08.091203 880 log.go:172] (0xc0003c88f0) Data frame received for 5\nI0326 00:01:08.091246 880 log.go:172] (0xc0006014a0) (5) Data frame handling\nI0326 00:01:08.091281 880 log.go:172] (0xc0006014a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 00:01:08.143551 880 log.go:172] (0xc0003c88f0) Data frame received for 3\nI0326 00:01:08.143616 880 log.go:172] (0xc000a92000) (3) Data frame handling\nI0326 00:01:08.143644 880 log.go:172] (0xc000a92000) (3) Data frame sent\nI0326 00:01:08.143665 880 log.go:172] (0xc0003c88f0) Data frame received for 3\nI0326 00:01:08.143690 880 log.go:172] (0xc000a92000) (3) Data frame handling\nI0326 00:01:08.143726 880 log.go:172] (0xc0003c88f0) Data frame received for 5\nI0326 00:01:08.143757 880 log.go:172] (0xc0006014a0) (5) Data frame handling\nI0326 00:01:08.145933 880 log.go:172] (0xc0003c88f0) Data frame received for 1\nI0326 00:01:08.145953 880 log.go:172] (0xc000601400) (1) Data frame handling\nI0326 00:01:08.145962 880 log.go:172] (0xc000601400) (1) Data frame sent\nI0326 00:01:08.145977 880 log.go:172] (0xc0003c88f0) (0xc000601400) Stream removed, broadcasting: 1\nI0326 00:01:08.145991 880 log.go:172] (0xc0003c88f0) Go away received\nI0326 00:01:08.146475 880 log.go:172] (0xc0003c88f0) (0xc000601400) Stream removed, broadcasting: 1\nI0326 00:01:08.146503 880 log.go:172] (0xc0003c88f0) (0xc000a92000) Stream removed, broadcasting: 3\nI0326 00:01:08.146517 880 log.go:172] (0xc0003c88f0) (0xc0006014a0) Stream removed, broadcasting: 5\n" Mar 26 00:01:08.152: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 00:01:08.152: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 26 00:01:08.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 00:01:08.377: INFO: stderr: "I0326 00:01:08.278437 903 log.go:172] (0xc00003a6e0) (0xc0006af2c0) Create stream\nI0326 00:01:08.278516 903 log.go:172] (0xc00003a6e0) (0xc0006af2c0) Stream added, broadcasting: 1\nI0326 00:01:08.281728 903 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0326 00:01:08.281779 903 log.go:172] (0xc00003a6e0) 
(0xc000a74000) Create stream\nI0326 00:01:08.281796 903 log.go:172] (0xc00003a6e0) (0xc000a74000) Stream added, broadcasting: 3\nI0326 00:01:08.282724 903 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0326 00:01:08.282757 903 log.go:172] (0xc00003a6e0) (0xc000afe000) Create stream\nI0326 00:01:08.282768 903 log.go:172] (0xc00003a6e0) (0xc000afe000) Stream added, broadcasting: 5\nI0326 00:01:08.283788 903 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0326 00:01:08.342692 903 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0326 00:01:08.342733 903 log.go:172] (0xc000afe000) (5) Data frame handling\nI0326 00:01:08.342753 903 log.go:172] (0xc000afe000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 00:01:08.371209 903 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0326 00:01:08.371254 903 log.go:172] (0xc000a74000) (3) Data frame handling\nI0326 00:01:08.371290 903 log.go:172] (0xc000a74000) (3) Data frame sent\nI0326 00:01:08.371458 903 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0326 00:01:08.371473 903 log.go:172] (0xc000a74000) (3) Data frame handling\nI0326 00:01:08.371540 903 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0326 00:01:08.371553 903 log.go:172] (0xc000afe000) (5) Data frame handling\nI0326 00:01:08.374019 903 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0326 00:01:08.374040 903 log.go:172] (0xc0006af2c0) (1) Data frame handling\nI0326 00:01:08.374048 903 log.go:172] (0xc0006af2c0) (1) Data frame sent\nI0326 00:01:08.374056 903 log.go:172] (0xc00003a6e0) (0xc0006af2c0) Stream removed, broadcasting: 1\nI0326 00:01:08.374178 903 log.go:172] (0xc00003a6e0) Go away received\nI0326 00:01:08.374349 903 log.go:172] (0xc00003a6e0) (0xc0006af2c0) Stream removed, broadcasting: 1\nI0326 00:01:08.374369 903 log.go:172] (0xc00003a6e0) (0xc000a74000) Stream removed, broadcasting: 3\nI0326 00:01:08.374377 903 log.go:172] (0xc00003a6e0) (0xc000afe000) Stream removed, broadcasting: 5\n" Mar 26 00:01:08.377: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 00:01:08.377: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 26 00:01:08.377: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 00:01:08.385: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 26 00:01:18.393: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 26 00:01:18.393: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 26 00:01:18.393: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 26 00:01:18.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999278s Mar 26 00:01:19.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989436624s Mar 26 00:01:20.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98379251s Mar 26 00:01:21.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978357744s Mar 26 00:01:22.431: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973454491s Mar 26 00:01:23.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.968343038s Mar 26 00:01:24.442: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962845455s Mar 26 00:01:25.447: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 2.957717989s Mar 26 00:01:26.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95259507s Mar 26 00:01:27.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 947.572938ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-1394 Mar 26 00:01:28.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 00:01:31.015: INFO: stderr: "I0326 00:01:30.921950 923 log.go:172] (0xc000bd8000) (0xc0010f20a0) Create stream\nI0326 00:01:30.922006 923 log.go:172] (0xc000bd8000) (0xc0010f20a0) Stream added, broadcasting: 1\nI0326 00:01:30.925589 923 log.go:172] (0xc000bd8000) Reply frame received for 1\nI0326 00:01:30.925648 923 log.go:172] (0xc000bd8000) (0xc000930140) Create stream\nI0326 00:01:30.925663 923 log.go:172] (0xc000bd8000) (0xc000930140) Stream added, broadcasting: 3\nI0326 00:01:30.926866 923 log.go:172] (0xc000bd8000) Reply frame received for 3\nI0326 00:01:30.926923 923 log.go:172] (0xc000bd8000) (0xc0009301e0) Create stream\nI0326 00:01:30.926946 923 log.go:172] (0xc000bd8000) (0xc0009301e0) Stream added, broadcasting: 5\nI0326 00:01:30.928177 923 log.go:172] (0xc000bd8000) Reply frame received for 5\nI0326 00:01:31.009287 923 log.go:172] (0xc000bd8000) Data frame received for 3\nI0326 00:01:31.009329 923 log.go:172] (0xc000930140) (3) Data frame handling\nI0326 00:01:31.009345 923 log.go:172] (0xc000930140) (3) Data frame sent\nI0326 00:01:31.009356 923 log.go:172] (0xc000bd8000) Data frame received for 3\nI0326 00:01:31.009365 923 log.go:172] (0xc000930140) (3) Data frame handling\nI0326 00:01:31.009400 923 log.go:172] (0xc000bd8000) Data frame received for 5\nI0326 00:01:31.009413 923 log.go:172] (0xc0009301e0) (5) Data frame handling\nI0326 00:01:31.009424 923 log.go:172] (0xc0009301e0) (5) Data frame sent\nI0326 00:01:31.009432 923 log.go:172] (0xc000bd8000) Data frame received for 5\nI0326 00:01:31.009441 923 log.go:172] (0xc0009301e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 00:01:31.011235 923 log.go:172] (0xc000bd8000) Data frame received for 1\nI0326 00:01:31.011255 923 log.go:172] (0xc0010f20a0) (1) Data frame handling\nI0326 00:01:31.011264 923 log.go:172] (0xc0010f20a0) (1) Data frame sent\nI0326 00:01:31.011274 923 log.go:172] (0xc000bd8000) (0xc0010f20a0) Stream removed, broadcasting: 1\nI0326 00:01:31.011373 923 log.go:172] (0xc000bd8000) Go away received\nI0326 00:01:31.011540 923 log.go:172] (0xc000bd8000) (0xc0010f20a0) Stream removed, broadcasting: 1\nI0326 00:01:31.011556 923 log.go:172] (0xc000bd8000) (0xc000930140) Stream removed, broadcasting: 3\nI0326 00:01:31.011562 923 log.go:172] (0xc000bd8000) (0xc0009301e0) Stream removed, broadcasting: 5\n" Mar 26 00:01:31.015: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 00:01:31.015: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 26 00:01:31.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 00:01:31.219: INFO: stderr: "I0326 00:01:31.144577 958 log.go:172] (0xc0009b86e0) (0xc000924000) Create 
stream\nI0326 00:01:31.144651 958 log.go:172] (0xc0009b86e0) (0xc000924000) Stream added, broadcasting: 1\nI0326 00:01:31.148877 958 log.go:172] (0xc0009b86e0) Reply frame received for 1\nI0326 00:01:31.148923 958 log.go:172] (0xc0009b86e0) (0xc000a74000) Create stream\nI0326 00:01:31.148938 958 log.go:172] (0xc0009b86e0) (0xc000a74000) Stream added, broadcasting: 3\nI0326 00:01:31.150018 958 log.go:172] (0xc0009b86e0) Reply frame received for 3\nI0326 00:01:31.150046 958 log.go:172] (0xc0009b86e0) (0xc0007e3180) Create stream\nI0326 00:01:31.150056 958 log.go:172] (0xc0009b86e0) (0xc0007e3180) Stream added, broadcasting: 5\nI0326 00:01:31.150886 958 log.go:172] (0xc0009b86e0) Reply frame received for 5\nI0326 00:01:31.212553 958 log.go:172] (0xc0009b86e0) Data frame received for 5\nI0326 00:01:31.212638 958 log.go:172] (0xc0007e3180) (5) Data frame handling\nI0326 00:01:31.212655 958 log.go:172] (0xc0007e3180) (5) Data frame sent\nI0326 00:01:31.212674 958 log.go:172] (0xc0009b86e0) Data frame received for 5\nI0326 00:01:31.212703 958 log.go:172] (0xc0007e3180) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 00:01:31.212728 958 log.go:172] (0xc0009b86e0) Data frame received for 3\nI0326 00:01:31.212757 958 log.go:172] (0xc000a74000) (3) Data frame handling\nI0326 00:01:31.212781 958 log.go:172] (0xc000a74000) (3) Data frame sent\nI0326 00:01:31.212793 958 log.go:172] (0xc0009b86e0) Data frame received for 3\nI0326 00:01:31.212826 958 log.go:172] (0xc000a74000) (3) Data frame handling\nI0326 00:01:31.214402 958 log.go:172] (0xc0009b86e0) Data frame received for 1\nI0326 00:01:31.214444 958 log.go:172] (0xc000924000) (1) Data frame handling\nI0326 00:01:31.214476 958 log.go:172] (0xc000924000) (1) Data frame sent\nI0326 00:01:31.214504 958 log.go:172] (0xc0009b86e0) (0xc000924000) Stream removed, broadcasting: 1\nI0326 00:01:31.214537 958 log.go:172] (0xc0009b86e0) Go away received\nI0326 00:01:31.215086 958 log.go:172] (0xc0009b86e0) (0xc000924000) Stream removed, broadcasting: 1\nI0326 00:01:31.215113 958 log.go:172] (0xc0009b86e0) (0xc000a74000) Stream removed, broadcasting: 3\nI0326 00:01:31.215128 958 log.go:172] (0xc0009b86e0) (0xc0007e3180) Stream removed, broadcasting: 5\n" Mar 26 00:01:31.219: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 00:01:31.220: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 26 00:01:31.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1394 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 00:01:31.419: INFO: stderr: "I0326 00:01:31.340577 980 log.go:172] (0xc000934840) (0xc0004072c0) Create stream\nI0326 00:01:31.340639 980 log.go:172] (0xc000934840) (0xc0004072c0) Stream added, broadcasting: 1\nI0326 00:01:31.343519 980 log.go:172] (0xc000934840) Reply frame received for 1\nI0326 00:01:31.343566 980 log.go:172] (0xc000934840) (0xc000862000) Create stream\nI0326 00:01:31.343581 980 log.go:172] (0xc000934840) (0xc000862000) Stream added, broadcasting: 3\nI0326 00:01:31.344704 980 log.go:172] (0xc000934840) Reply frame received for 3\nI0326 00:01:31.344743 980 log.go:172] (0xc000934840) (0xc000407360) Create stream\nI0326 00:01:31.344754 980 log.go:172] (0xc000934840) (0xc000407360) Stream added, broadcasting: 5\nI0326 00:01:31.345897 980 log.go:172] (0xc000934840) 
Reply frame received for 5\nI0326 00:01:31.414058 980 log.go:172] (0xc000934840) Data frame received for 5\nI0326 00:01:31.414081 980 log.go:172] (0xc000407360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 00:01:31.414099 980 log.go:172] (0xc000934840) Data frame received for 3\nI0326 00:01:31.414124 980 log.go:172] (0xc000862000) (3) Data frame handling\nI0326 00:01:31.414136 980 log.go:172] (0xc000862000) (3) Data frame sent\nI0326 00:01:31.414148 980 log.go:172] (0xc000934840) Data frame received for 3\nI0326 00:01:31.414157 980 log.go:172] (0xc000862000) (3) Data frame handling\nI0326 00:01:31.414170 980 log.go:172] (0xc000407360) (5) Data frame sent\nI0326 00:01:31.414180 980 log.go:172] (0xc000934840) Data frame received for 5\nI0326 00:01:31.414188 980 log.go:172] (0xc000407360) (5) Data frame handling\nI0326 00:01:31.415723 980 log.go:172] (0xc000934840) Data frame received for 1\nI0326 00:01:31.415745 980 log.go:172] (0xc0004072c0) (1) Data frame handling\nI0326 00:01:31.415757 980 log.go:172] (0xc0004072c0) (1) Data frame sent\nI0326 00:01:31.415774 980 log.go:172] (0xc000934840) (0xc0004072c0) Stream removed, broadcasting: 1\nI0326 00:01:31.415795 980 log.go:172] (0xc000934840) Go away received\nI0326 00:01:31.416209 980 log.go:172] (0xc000934840) (0xc0004072c0) Stream removed, broadcasting: 1\nI0326 00:01:31.416232 980 log.go:172] (0xc000934840) (0xc000862000) Stream removed, broadcasting: 3\nI0326 00:01:31.416245 980 log.go:172] (0xc000934840) (0xc000407360) Stream removed, broadcasting: 5\n" Mar 26 00:01:31.419: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 00:01:31.419: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 26 00:01:31.419: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 26 00:02:01.435: INFO: Deleting all statefulset in ns statefulset-1394 Mar 26 00:02:01.438: INFO: Scaling statefulset ss to 0 Mar 26 00:02:01.447: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 00:02:01.450: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:01.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1394" for this suite. 
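Why the set halts in both directions: with the default OrderedReady pod management policy, ordinal n+1 is only created once ordinal n is Running and Ready, and scale-down removes the highest ordinal first, so a Ready=false pod (induced above by mv'ing index.html out of the httpd docroot so the readiness check 404s) freezes progress. A minimal sketch of such a StatefulSet, assuming a pre-1.23 k8s.io/api where Probe still embeds Handler (later renamed ProbeHandler); the probe path and port 80 are assumptions, while the labels, service name, and httpd image are taken from the log above:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// OrderedReady (the default) serializes creation ss-0 -> ss-1 -> ss-2 on
	// readiness, and deletion in reverse; one unhealthy pod halts both.
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            int32Ptr(3),
			ServiceName:         "test",
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"foo": "bar", "baz": "blah"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"foo": "bar", "baz": "blah"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4.38-alpine",
						// Readiness tracks the file the test mv's away, so
						// hiding index.html flips the pod to Ready=false.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(ss, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}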
• [SLOW TEST:94.677 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":105,"skipped":1684,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:01.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 26 00:02:02.062: INFO: created pod pod-service-account-defaultsa Mar 26 00:02:02.063: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 26 00:02:02.066: INFO: created pod pod-service-account-mountsa Mar 26 00:02:02.066: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 26 00:02:02.072: INFO: created pod pod-service-account-nomountsa Mar 26 00:02:02.072: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 26 00:02:02.130: INFO: created pod pod-service-account-defaultsa-mountspec Mar 26 00:02:02.130: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 26 00:02:02.145: INFO: created pod pod-service-account-mountsa-mountspec Mar 26 00:02:02.145: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 26 00:02:02.200: INFO: created pod pod-service-account-nomountsa-mountspec Mar 26 00:02:02.200: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 26 00:02:02.285: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 26 00:02:02.285: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 26 00:02:02.330: INFO: created pod pod-service-account-mountsa-nomountspec Mar 26 00:02:02.330: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 26 00:02:02.356: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 26 00:02:02.356: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:02.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2687" for this suite. 
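The pods above pair a ServiceAccount-level automount setting with a pod-level one; the pod-level automountServiceAccountToken field takes precedence. A minimal sketch of the pod-level opt-out (pod name and image are illustrative, not taken from the test source):

cat <<'EOF' | kubectl create --namespace=svcaccounts-2687 -f -
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo                     # illustrative name
spec:
  automountServiceAccountToken: false    # pod-level opt-out of the API token volume
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
# No service account token volume should appear in the created spec:
kubectl get pod nomount-demo --namespace=svcaccounts-2687 -o jsonpath='{.spec.volumes}'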
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":106,"skipped":1685,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:02.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 26 00:02:02.643: INFO: Waiting up to 5m0s for pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c" in namespace "containers-6260" to be "Succeeded or Failed" Mar 26 00:02:02.647: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538492ms Mar 26 00:02:04.891: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24724914s Mar 26 00:02:07.406: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.763087499s Mar 26 00:02:09.556: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912212389s Mar 26 00:02:11.578: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.934725669s Mar 26 00:02:13.599: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Running", Reason="", readiness=true. Elapsed: 10.955609145s Mar 26 00:02:15.603: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.959492951s STEP: Saw pod success Mar 26 00:02:15.603: INFO: Pod "client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c" satisfied condition "Succeeded or Failed" Mar 26 00:02:15.605: INFO: Trying to get logs from node latest-worker2 pod client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c container test-container: STEP: delete the pod Mar 26 00:02:15.711: INFO: Waiting for pod client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c to disappear Mar 26 00:02:15.719: INFO: Pod client-containers-3e248863-428e-4326-a8e1-6af2073e7b2c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:15.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6260" for this suite. 
• [SLOW TEST:13.240 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:15.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 26 00:02:15.780: INFO: namespace kubectl-3549 Mar 26 00:02:15.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3549' Mar 26 00:02:16.080: INFO: stderr: "" Mar 26 00:02:16.080: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 26 00:02:17.084: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:17.084: INFO: Found 0 / 1 Mar 26 00:02:18.091: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:18.091: INFO: Found 0 / 1 Mar 26 00:02:19.084: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:19.084: INFO: Found 1 / 1 Mar 26 00:02:19.084: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 26 00:02:19.086: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:19.086: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 26 00:02:19.086: INFO: wait on agnhost-master startup in kubectl-3549 Mar 26 00:02:19.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-5wsnr agnhost-master --namespace=kubectl-3549' Mar 26 00:02:19.204: INFO: stderr: "" Mar 26 00:02:19.204: INFO: stdout: "Paused\n" STEP: exposing RC Mar 26 00:02:19.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3549' Mar 26 00:02:19.343: INFO: stderr: "" Mar 26 00:02:19.343: INFO: stdout: "service/rm2 exposed\n" Mar 26 00:02:19.352: INFO: Service rm2 in namespace kubectl-3549 found. 
STEP: exposing service Mar 26 00:02:21.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3549' Mar 26 00:02:21.478: INFO: stderr: "" Mar 26 00:02:21.479: INFO: stdout: "service/rm3 exposed\n" Mar 26 00:02:21.510: INFO: Service rm3 in namespace kubectl-3549 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:23.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3549" for this suite. • [SLOW TEST:7.799 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":108,"skipped":1719,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:23.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 26 00:02:23.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4661' Mar 26 00:02:23.860: INFO: stderr: "" Mar 26 00:02:23.860: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 26 00:02:24.865: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:24.865: INFO: Found 0 / 1 Mar 26 00:02:25.864: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:25.864: INFO: Found 0 / 1 Mar 26 00:02:26.864: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:26.864: INFO: Found 1 / 1 Mar 26 00:02:26.864: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 26 00:02:26.868: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:26.868: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 26 00:02:26.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-94qhm --namespace=kubectl-4661 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 26 00:02:26.966: INFO: stderr: "" Mar 26 00:02:26.966: INFO: stdout: "pod/agnhost-master-94qhm patched\n" STEP: checking annotations Mar 26 00:02:26.974: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:02:26.974: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:26.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4661" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":109,"skipped":1731,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:26.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 26 00:02:31.084: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9851 PodName:pod-sharedvolume-102376e3-6990-4561-9716-4d5b44b01560 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 00:02:31.084: INFO: >>> kubeConfig: /root/.kube/config I0326 00:02:31.115179       7 log.go:172] (0xc0025d6000) (0xc000953400) Create stream I0326 00:02:31.115220       7 log.go:172] (0xc0025d6000) (0xc000953400) Stream added, broadcasting: 1 I0326 00:02:31.122428       7 log.go:172] (0xc0025d6000) Reply frame received for 1 I0326 00:02:31.122471       7 log.go:172] (0xc0025d6000) (0xc00045ba40) Create stream I0326 00:02:31.122481       7 log.go:172] (0xc0025d6000) (0xc00045ba40) Stream added, broadcasting: 3 I0326 00:02:31.123892       7 log.go:172] (0xc0025d6000) Reply frame received for 3 I0326 00:02:31.123914       7 log.go:172] (0xc0025d6000) (0xc00045bf40) Create stream I0326 00:02:31.123922       7 log.go:172] (0xc0025d6000) (0xc00045bf40) Stream added, broadcasting: 5 I0326 00:02:31.124833       7 log.go:172] (0xc0025d6000) Reply frame received for 5 I0326 00:02:31.174765       7 log.go:172] (0xc0025d6000) Data frame received for 3 I0326 00:02:31.174796       7 log.go:172] (0xc00045ba40) (3) Data frame handling I0326 00:02:31.174816       7 log.go:172] (0xc00045ba40) (3) Data frame sent I0326 00:02:31.174829       7 log.go:172] (0xc0025d6000) Data frame received for 3 I0326 00:02:31.174853       7 log.go:172] (0xc00045ba40) (3) Data frame handling I0326 00:02:31.175271       7 log.go:172] (0xc0025d6000) Data frame received for 5 I0326 00:02:31.175301       7 log.go:172] (0xc00045bf40) (5) Data frame handling I0326 00:02:31.176137       7 
log.go:172] (0xc0025d6000) Data frame received for 1 I0326 00:02:31.176155 7 log.go:172] (0xc000953400) (1) Data frame handling I0326 00:02:31.176173 7 log.go:172] (0xc000953400) (1) Data frame sent I0326 00:02:31.176190 7 log.go:172] (0xc0025d6000) (0xc000953400) Stream removed, broadcasting: 1 I0326 00:02:31.176206 7 log.go:172] (0xc0025d6000) Go away received I0326 00:02:31.176297 7 log.go:172] (0xc0025d6000) (0xc000953400) Stream removed, broadcasting: 1 I0326 00:02:31.176316 7 log.go:172] (0xc0025d6000) (0xc00045ba40) Stream removed, broadcasting: 3 I0326 00:02:31.176322 7 log.go:172] (0xc0025d6000) (0xc00045bf40) Stream removed, broadcasting: 5 Mar 26 00:02:31.176: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:31.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9851" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":110,"skipped":1748,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:31.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 26 00:02:31.231: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:02:38.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3930" for this suite. 
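For reference, the shape of the RestartAlways pod this test builds: init containers run sequentially, and each must exit 0 before the next starts or the regular containers run. Names and images below are illustrative assumptions, not taken from the test source:

cat <<'EOF' | kubectl create --namespace=init-container-3930 -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                # illustrative name
spec:
  restartPolicy: Always
  initContainers:                # run in order, to completion, before 'main'
  - name: init-1
    image: busybox
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF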
• [SLOW TEST:7.645 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":111,"skipped":1753,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:02:38.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-09e1108f-716a-48e2-a039-8a978471b891 in namespace container-probe-5437 Mar 26 00:02:43.257: INFO: Started pod busybox-09e1108f-716a-48e2-a039-8a978471b891 in namespace container-probe-5437 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 00:02:43.260: INFO: Initial restart count of pod busybox-09e1108f-716a-48e2-a039-8a978471b891 is 0 Mar 26 00:03:31.365: INFO: Restart count of pod container-probe-5437/busybox-09e1108f-716a-48e2-a039-8a978471b891 is now 1 (48.105843288s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:03:31.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5437" for this suite. 
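The restart at roughly 48s is the classic exec-probe pattern: the container creates /tmp/health, deletes it after a delay, and the kubelet restarts the container once cat /tmp/health starts failing. A minimal sketch (timings and names are illustrative; the test's exact pod spec is not shown in this log):

cat <<'EOF' | kubectl create --namespace=container-probe-5437 -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # starts failing once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# RESTARTS should go from 0 to 1 shortly after /tmp/health disappears:
kubectl get pod liveness-exec-demo --namespace=container-probe-5437 --watch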
• [SLOW TEST:52.567 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1756,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:03:31.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-e313fb48-a951-43bd-9bec-517abf6f8258 STEP: Creating a pod to test consume secrets Mar 26 00:03:31.496: INFO: Waiting up to 5m0s for pod "pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2" in namespace "secrets-8816" to be "Succeeded or Failed" Mar 26 00:03:31.502: INFO: Pod "pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579497ms Mar 26 00:03:33.508: INFO: Pod "pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012421576s Mar 26 00:03:35.513: INFO: Pod "pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016932744s STEP: Saw pod success Mar 26 00:03:35.513: INFO: Pod "pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2" satisfied condition "Succeeded or Failed" Mar 26 00:03:35.516: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2 container secret-volume-test: STEP: delete the pod Mar 26 00:03:35.575: INFO: Waiting for pod pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2 to disappear Mar 26 00:03:35.591: INFO: Pod pod-secrets-a646034b-1bb3-4156-8539-c305c9a06db2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:03:35.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8816" for this suite. 
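"Mappings and Item Mode set" refers to the items list of a secret volume: each key can be remapped to a new file path and given its own mode bits. A sketch with assumed names (the test generates random ones):

cat <<'EOF' | kubectl create --namespace=secrets-8816 -f -
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo              # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - key: data-1
        path: new-path-data-1    # the mapping: key data-1 shows up under this name
        mode: 0400               # the per-item file mode
EOF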
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1761,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:03:35.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:03:39.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7223" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1768,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:03:39.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 26 00:03:40.490: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 26 00:03:42.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777820, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777820, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720777820, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:03:45.567: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:03:45.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:03:46.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6856" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.102 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":115,"skipped":1768,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:03:46.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a Mar 26 00:03:46.929: INFO: Pod name my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a: Found 0 pods out of 1 Mar 26 00:03:51.932: INFO: Pod name my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a: Found 1 pods out of 1 Mar 26 00:03:51.932: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a" are running Mar 26 00:03:51.935: INFO: Pod "my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a-rmp96" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 00:03:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-03-26 00:03:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 00:03:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 00:03:46 +0000 UTC Reason: Message:}]) Mar 26 00:03:51.935: INFO: Trying to dial the pod Mar 26 00:03:56.946: INFO: Controller my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a: Got expected result from replica 1 [my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a-rmp96]: "my-hostname-basic-8a17c139-e186-4eca-a107-5763b461735a-rmp96", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:03:56.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1312" for this suite. • [SLOW TEST:10.133 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":116,"skipped":1769,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:03:56.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 26 00:03:57.008: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:04:03.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7995" for this suite. 
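Same init-container sequencing as the RestartAlways case earlier, except that with restartPolicy: Never the pod finishes as Succeeded or Failed instead of being kept running. The init containers' outcomes are recorded in status and can be inspected like this (the pod name is a placeholder):

kubectl get pod <pod-name> --namespace=init-container-7995 \
  -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{" -> "}{.state.terminated.reason}{"\n"}{end}'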
• [SLOW TEST:7.028 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":117,"skipped":1781,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:04:03.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 26 00:04:04.033: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:04:19.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-386" for this suite. 
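A rename changes which versions the CRD serves and, through the aggregator, which definitions appear in the published OpenAPI document. Rough manual checks, with placeholders for the generated names (a sketch, not the test's own tooling):

# Which versions the CRD serves after the rename:
kubectl get crd <crd-name> \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'
# The aggregated spec the test inspects; search it for the renamed version:
kubectl get --raw /openapi/v2 > /tmp/openapi.json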
• [SLOW TEST:15.697 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":118,"skipped":1796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:04:19.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:04:19.722: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 26 00:04:22.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-924 create -f -' Mar 26 00:04:25.542: INFO: stderr: "" Mar 26 00:04:25.542: INFO: stdout: "e2e-test-crd-publish-openapi-6955-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 26 00:04:25.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-924 delete e2e-test-crd-publish-openapi-6955-crds test-cr' Mar 26 00:04:25.679: INFO: stderr: "" Mar 26 00:04:25.679: INFO: stdout: "e2e-test-crd-publish-openapi-6955-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 26 00:04:25.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-924 apply -f -' Mar 26 00:04:25.950: INFO: stderr: "" Mar 26 00:04:25.950: INFO: stdout: "e2e-test-crd-publish-openapi-6955-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 26 00:04:25.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-924 delete e2e-test-crd-publish-openapi-6955-crds test-cr' Mar 26 00:04:26.055: INFO: stderr: "" Mar 26 00:04:26.055: INFO: stdout: "e2e-test-crd-publish-openapi-6955-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 26 00:04:26.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6955-crds' Mar 26 00:04:26.282: INFO: stderr: "" Mar 26 00:04:26.282: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6955-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:04:29.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-924" for this suite. • [SLOW TEST:9.530 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":119,"skipped":1831,"failed":0} S ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:04:29.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:04:29.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8782" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":120,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:04:29.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 26 00:04:29.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:29.500: INFO: Number of nodes with available pods: 0 Mar 26 00:04:29.500: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:30.505: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:30.508: INFO: Number of nodes with available pods: 0 Mar 26 00:04:30.508: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:31.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:31.624: INFO: Number of nodes with available pods: 0 Mar 26 00:04:31.624: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:32.505: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:32.509: INFO: Number of nodes with available pods: 0 Mar 26 00:04:32.509: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:33.505: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:33.508: INFO: Number of nodes with available pods: 1 Mar 26 00:04:33.508: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:34.505: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:34.508: INFO: Number of nodes with available pods: 2 Mar 26 00:04:34.508: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 26 00:04:34.546: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:34.551: INFO: Number of nodes with available pods: 1 Mar 26 00:04:34.551: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:35.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:35.560: INFO: Number of nodes with available pods: 1 Mar 26 00:04:35.560: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:36.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:36.561: INFO: Number of nodes with available pods: 1 Mar 26 00:04:36.561: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:04:37.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:04:37.564: INFO: Number of nodes with available pods: 2 Mar 26 00:04:37.564: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-981, will wait for the garbage collector to delete the pods Mar 26 00:04:37.627: INFO: Deleting DaemonSet.extensions daemon-set took: 6.429817ms Mar 26 00:04:45.328: INFO: Terminating DaemonSet.extensions daemon-set pods took: 7.700314203s Mar 26 00:04:53.035: INFO: Number of nodes with available pods: 0 Mar 26 00:04:53.035: INFO: Number of running nodes: 0, number of available pods: 0 Mar 26 00:04:53.037: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-981/daemonsets","resourceVersion":"2808571"},"items":null} Mar 26 00:04:53.039: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-981/pods","resourceVersion":"2808571"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:04:53.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-981" for this suite. 
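The revive check comes down to the DaemonSet controller deleting a pod whose phase was forced to Failed and creating a replacement. Outside the framework the same thing is visible in the status counters (namespace taken from this run):

kubectl get daemonset daemon-set --namespace=daemonsets-981 \
  -o jsonpath='{.status.desiredNumberScheduled}{" desired / "}{.status.numberReady}{" ready\n"}'
# Watch the failed pod disappear and its replacement come up:
kubectl get pods --namespace=daemonsets-981 --watch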
• [SLOW TEST:23.662 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":121,"skipped":1925,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:04:53.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-071d6946-9fe7-4f8b-9bb1-ccf978c76e28 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:04:57.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7756" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":1940,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:04:57.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 26 00:04:57.251: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 26 00:05:07.703: INFO: >>> kubeConfig: /root/.kube/config Mar 26 00:05:10.611: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:05:20.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7369" for this suite. 
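When one CRD serves several versions, or several CRDs share a group, each served version gets its own definitions in the published spec; that is what this test asserts. With placeholder group and resource names, both of these should resolve (again a sketch, not the test's own tooling):

kubectl explain <plural> --api-version=<group>/v1
kubectl explain <plural> --api-version=<group>/v2   # both versions are published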
• [SLOW TEST:22.974 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":123,"skipped":1940,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:05:20.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9100 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 26 00:05:20.299: INFO: Found 0 stateful pods, waiting for 3 Mar 26 00:05:30.304: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 26 00:05:30.304: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 26 00:05:30.304: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 26 00:05:30.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9100 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 00:05:30.577: INFO: stderr: "I0326 00:05:30.446127 1247 log.go:172] (0xc00003b130) (0xc0006c7540) Create stream\nI0326 00:05:30.446202 1247 log.go:172] (0xc00003b130) (0xc0006c7540) Stream added, broadcasting: 1\nI0326 00:05:30.448450 1247 log.go:172] (0xc00003b130) Reply frame received for 1\nI0326 00:05:30.448488 1247 log.go:172] (0xc00003b130) (0xc000664be0) Create stream\nI0326 00:05:30.448498 1247 log.go:172] (0xc00003b130) (0xc000664be0) Stream added, broadcasting: 3\nI0326 00:05:30.449491 1247 log.go:172] (0xc00003b130) Reply frame received for 3\nI0326 00:05:30.449518 1247 log.go:172] (0xc00003b130) (0xc00077a000) Create stream\nI0326 00:05:30.449525 1247 log.go:172] (0xc00003b130) (0xc00077a000) Stream added, broadcasting: 5\nI0326 00:05:30.450264 1247 log.go:172] (0xc00003b130) Reply frame received for 5\nI0326 00:05:30.544482 1247 log.go:172] (0xc00003b130) Data frame received for 5\nI0326 00:05:30.544515 1247 log.go:172] (0xc00077a000) (5) Data frame handling\nI0326 00:05:30.544536 1247 log.go:172] 
(0xc00077a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 00:05:30.571899 1247 log.go:172] (0xc00003b130) Data frame received for 3\nI0326 00:05:30.571914 1247 log.go:172] (0xc000664be0) (3) Data frame handling\nI0326 00:05:30.571929 1247 log.go:172] (0xc000664be0) (3) Data frame sent\nI0326 00:05:30.572247 1247 log.go:172] (0xc00003b130) Data frame received for 3\nI0326 00:05:30.572279 1247 log.go:172] (0xc000664be0) (3) Data frame handling\nI0326 00:05:30.572302 1247 log.go:172] (0xc00003b130) Data frame received for 5\nI0326 00:05:30.572313 1247 log.go:172] (0xc00077a000) (5) Data frame handling\nI0326 00:05:30.574500 1247 log.go:172] (0xc00003b130) Data frame received for 1\nI0326 00:05:30.574523 1247 log.go:172] (0xc0006c7540) (1) Data frame handling\nI0326 00:05:30.574543 1247 log.go:172] (0xc0006c7540) (1) Data frame sent\nI0326 00:05:30.574566 1247 log.go:172] (0xc00003b130) (0xc0006c7540) Stream removed, broadcasting: 1\nI0326 00:05:30.574650 1247 log.go:172] (0xc00003b130) Go away received\nI0326 00:05:30.574822 1247 log.go:172] (0xc00003b130) (0xc0006c7540) Stream removed, broadcasting: 1\nI0326 00:05:30.574838 1247 log.go:172] (0xc00003b130) (0xc000664be0) Stream removed, broadcasting: 3\nI0326 00:05:30.574844 1247 log.go:172] (0xc00003b130) (0xc00077a000) Stream removed, broadcasting: 5\n" Mar 26 00:05:30.578: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 00:05:30.578: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 26 00:05:40.609: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 26 00:05:50.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9100 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 00:05:50.847: INFO: stderr: "I0326 00:05:50.764160 1266 log.go:172] (0xc0009bc0b0) (0xc0009f8000) Create stream\nI0326 00:05:50.764209 1266 log.go:172] (0xc0009bc0b0) (0xc0009f8000) Stream added, broadcasting: 1\nI0326 00:05:50.766737 1266 log.go:172] (0xc0009bc0b0) Reply frame received for 1\nI0326 00:05:50.766768 1266 log.go:172] (0xc0009bc0b0) (0xc00064d680) Create stream\nI0326 00:05:50.766776 1266 log.go:172] (0xc0009bc0b0) (0xc00064d680) Stream added, broadcasting: 3\nI0326 00:05:50.767665 1266 log.go:172] (0xc0009bc0b0) Reply frame received for 3\nI0326 00:05:50.767717 1266 log.go:172] (0xc0009bc0b0) (0xc0006db2c0) Create stream\nI0326 00:05:50.767735 1266 log.go:172] (0xc0009bc0b0) (0xc0006db2c0) Stream added, broadcasting: 5\nI0326 00:05:50.768626 1266 log.go:172] (0xc0009bc0b0) Reply frame received for 5\nI0326 00:05:50.840626 1266 log.go:172] (0xc0009bc0b0) Data frame received for 5\nI0326 00:05:50.840660 1266 log.go:172] (0xc0006db2c0) (5) Data frame handling\nI0326 00:05:50.840681 1266 log.go:172] (0xc0006db2c0) (5) Data frame sent\nI0326 00:05:50.840694 1266 log.go:172] (0xc0009bc0b0) Data frame received for 5\nI0326 00:05:50.840705 1266 log.go:172] (0xc0006db2c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 00:05:50.840930 1266 log.go:172] (0xc0009bc0b0) Data frame received for 3\nI0326 00:05:50.840963 1266 log.go:172] (0xc00064d680) (3) Data 
frame handling\nI0326 00:05:50.840984 1266 log.go:172] (0xc00064d680) (3) Data frame sent\nI0326 00:05:50.840996 1266 log.go:172] (0xc0009bc0b0) Data frame received for 3\nI0326 00:05:50.841026 1266 log.go:172] (0xc00064d680) (3) Data frame handling\nI0326 00:05:50.842523 1266 log.go:172] (0xc0009bc0b0) Data frame received for 1\nI0326 00:05:50.842543 1266 log.go:172] (0xc0009f8000) (1) Data frame handling\nI0326 00:05:50.842552 1266 log.go:172] (0xc0009f8000) (1) Data frame sent\nI0326 00:05:50.842561 1266 log.go:172] (0xc0009bc0b0) (0xc0009f8000) Stream removed, broadcasting: 1\nI0326 00:05:50.842592 1266 log.go:172] (0xc0009bc0b0) Go away received\nI0326 00:05:50.842997 1266 log.go:172] (0xc0009bc0b0) (0xc0009f8000) Stream removed, broadcasting: 1\nI0326 00:05:50.843021 1266 log.go:172] (0xc0009bc0b0) (0xc00064d680) Stream removed, broadcasting: 3\nI0326 00:05:50.843034 1266 log.go:172] (0xc0009bc0b0) (0xc0006db2c0) Stream removed, broadcasting: 5\n" Mar 26 00:05:50.847: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 00:05:50.847: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 26 00:06:00.867: INFO: Waiting for StatefulSet statefulset-9100/ss2 to complete update Mar 26 00:06:00.867: INFO: Waiting for Pod statefulset-9100/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 26 00:06:00.867: INFO: Waiting for Pod statefulset-9100/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 26 00:06:00.867: INFO: Waiting for Pod statefulset-9100/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 26 00:06:10.875: INFO: Waiting for StatefulSet statefulset-9100/ss2 to complete update Mar 26 00:06:10.875: INFO: Waiting for Pod statefulset-9100/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 26 00:06:10.875: INFO: Waiting for Pod statefulset-9100/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 26 00:06:20.875: INFO: Waiting for StatefulSet statefulset-9100/ss2 to complete update STEP: Rolling back to a previous revision Mar 26 00:06:30.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9100 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 00:06:31.126: INFO: stderr: "I0326 00:06:31.011858 1289 log.go:172] (0xc0005f9a20) (0xc000a1a000) Create stream\nI0326 00:06:31.011922 1289 log.go:172] (0xc0005f9a20) (0xc000a1a000) Stream added, broadcasting: 1\nI0326 00:06:31.015264 1289 log.go:172] (0xc0005f9a20) Reply frame received for 1\nI0326 00:06:31.015310 1289 log.go:172] (0xc0005f9a20) (0xc00073b400) Create stream\nI0326 00:06:31.015323 1289 log.go:172] (0xc0005f9a20) (0xc00073b400) Stream added, broadcasting: 3\nI0326 00:06:31.016314 1289 log.go:172] (0xc0005f9a20) Reply frame received for 3\nI0326 00:06:31.016352 1289 log.go:172] (0xc0005f9a20) (0xc00073b5e0) Create stream\nI0326 00:06:31.016365 1289 log.go:172] (0xc0005f9a20) (0xc00073b5e0) Stream added, broadcasting: 5\nI0326 00:06:31.017689 1289 log.go:172] (0xc0005f9a20) Reply frame received for 5\nI0326 00:06:31.087851 1289 log.go:172] (0xc0005f9a20) Data frame received for 5\nI0326 00:06:31.087880 1289 log.go:172] (0xc00073b5e0) (5) Data frame handling\nI0326 00:06:31.087900 1289 log.go:172] (0xc00073b5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0326 00:06:31.118679 1289 log.go:172] (0xc0005f9a20) Data frame received for 5\nI0326 00:06:31.118745 1289 log.go:172] (0xc00073b5e0) (5) Data frame handling\nI0326 00:06:31.118781 1289 log.go:172] (0xc0005f9a20) Data frame received for 3\nI0326 00:06:31.118813 1289 log.go:172] (0xc00073b400) (3) Data frame handling\nI0326 00:06:31.118846 1289 log.go:172] (0xc00073b400) (3) Data frame sent\nI0326 00:06:31.118948 1289 log.go:172] (0xc0005f9a20) Data frame received for 3\nI0326 00:06:31.118979 1289 log.go:172] (0xc00073b400) (3) Data frame handling\nI0326 00:06:31.120847 1289 log.go:172] (0xc0005f9a20) Data frame received for 1\nI0326 00:06:31.120876 1289 log.go:172] (0xc000a1a000) (1) Data frame handling\nI0326 00:06:31.120896 1289 log.go:172] (0xc000a1a000) (1) Data frame sent\nI0326 00:06:31.120914 1289 log.go:172] (0xc0005f9a20) (0xc000a1a000) Stream removed, broadcasting: 1\nI0326 00:06:31.120941 1289 log.go:172] (0xc0005f9a20) Go away received\nI0326 00:06:31.121667 1289 log.go:172] (0xc0005f9a20) (0xc000a1a000) Stream removed, broadcasting: 1\nI0326 00:06:31.121699 1289 log.go:172] (0xc0005f9a20) (0xc00073b400) Stream removed, broadcasting: 3\nI0326 00:06:31.121722 1289 log.go:172] (0xc0005f9a20) (0xc00073b5e0) Stream removed, broadcasting: 5\n" Mar 26 00:06:31.126: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 00:06:31.126: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 26 00:06:41.158: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 26 00:06:51.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9100 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 00:06:51.441: INFO: stderr: "I0326 00:06:51.336923 1310 log.go:172] (0xc0000eab00) (0xc00083b400) Create stream\nI0326 00:06:51.336962 1310 log.go:172] (0xc0000eab00) (0xc00083b400) Stream added, broadcasting: 1\nI0326 00:06:51.344540 1310 log.go:172] (0xc0000eab00) Reply frame received for 1\nI0326 00:06:51.344595 1310 log.go:172] (0xc0000eab00) (0xc00083b5e0) Create stream\nI0326 00:06:51.344609 1310 log.go:172] (0xc0000eab00) (0xc00083b5e0) Stream added, broadcasting: 3\nI0326 00:06:51.347048 1310 log.go:172] (0xc0000eab00) Reply frame received for 3\nI0326 00:06:51.347104 1310 log.go:172] (0xc0000eab00) (0xc000960000) Create stream\nI0326 00:06:51.347128 1310 log.go:172] (0xc0000eab00) (0xc000960000) Stream added, broadcasting: 5\nI0326 00:06:51.348208 1310 log.go:172] (0xc0000eab00) Reply frame received for 5\nI0326 00:06:51.434289 1310 log.go:172] (0xc0000eab00) Data frame received for 5\nI0326 00:06:51.434325 1310 log.go:172] (0xc000960000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 00:06:51.434359 1310 log.go:172] (0xc0000eab00) Data frame received for 3\nI0326 00:06:51.434387 1310 log.go:172] (0xc00083b5e0) (3) Data frame handling\nI0326 00:06:51.434400 1310 log.go:172] (0xc00083b5e0) (3) Data frame sent\nI0326 00:06:51.434412 1310 log.go:172] (0xc0000eab00) Data frame received for 3\nI0326 00:06:51.434422 1310 log.go:172] (0xc00083b5e0) (3) Data frame handling\nI0326 00:06:51.434442 1310 log.go:172] (0xc000960000) (5) Data frame sent\nI0326 00:06:51.434470 1310 log.go:172] (0xc0000eab00) Data frame received for 5\nI0326 00:06:51.434483 1310 log.go:172] (0xc000960000) (5) Data frame 
handling\nI0326 00:06:51.436225 1310 log.go:172] (0xc0000eab00) Data frame received for 1\nI0326 00:06:51.436246 1310 log.go:172] (0xc00083b400) (1) Data frame handling\nI0326 00:06:51.436257 1310 log.go:172] (0xc00083b400) (1) Data frame sent\nI0326 00:06:51.436267 1310 log.go:172] (0xc0000eab00) (0xc00083b400) Stream removed, broadcasting: 1\nI0326 00:06:51.436435 1310 log.go:172] (0xc0000eab00) Go away received\nI0326 00:06:51.436708 1310 log.go:172] (0xc0000eab00) (0xc00083b400) Stream removed, broadcasting: 1\nI0326 00:06:51.436730 1310 log.go:172] (0xc0000eab00) (0xc00083b5e0) Stream removed, broadcasting: 3\nI0326 00:06:51.436743 1310 log.go:172] (0xc0000eab00) (0xc000960000) Stream removed, broadcasting: 5\n" Mar 26 00:06:51.441: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 00:06:51.441: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 26 00:07:01.462: INFO: Waiting for StatefulSet statefulset-9100/ss2 to complete update Mar 26 00:07:01.462: INFO: Waiting for Pod statefulset-9100/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 26 00:07:01.462: INFO: Waiting for Pod statefulset-9100/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 26 00:07:01.462: INFO: Waiting for Pod statefulset-9100/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 26 00:07:11.472: INFO: Waiting for StatefulSet statefulset-9100/ss2 to complete update Mar 26 00:07:11.472: INFO: Waiting for Pod statefulset-9100/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 26 00:07:21.471: INFO: Deleting all statefulset in ns statefulset-9100 Mar 26 00:07:21.473: INFO: Scaling statefulset ss2 to 0 Mar 26 00:07:41.506: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 00:07:41.509: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:07:41.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9100" for this suite. 
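The cycle above — update the template image, let the controller walk the pods in reverse ordinal order, then roll back — maps directly onto kubectl's rollout commands. A minimal sketch of driving the same cycle by hand, reusing the names from this run (StatefulSet ss2 in namespace statefulset-9100) and assuming the pod template's container is called webserver (the container name is not shown in the log):

# Trigger a rolling update by changing the template image (container name assumed).
kubectl -n statefulset-9100 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
# Watch pods being replaced in reverse ordinal order (ss2-2, ss2-1, ss2-0).
kubectl -n statefulset-9100 rollout status statefulset/ss2
# List recorded revisions, then return to the previous template.
kubectl -n statefulset-9100 rollout history statefulset/ss2
kubectl -n statefulset-9100 rollout undo statefulset/ss2

The ss2-65c7964b94 and ss2-84f9d6bf57 strings the test waits on are ControllerRevision names; kubectl get controllerrevisions -n statefulset-9100 lists them.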
• [SLOW TEST:141.355 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":124,"skipped":1949,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:07:41.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:07:41.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4684" for this suite. 
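The QOS class verified above is derived from the pod spec: a pod whose containers all set CPU and memory requests equal to their limits is classed Guaranteed, requests below limits give Burstable, and no requests or limits at all give BestEffort. A sketch of a pod that lands in the Guaranteed class (names and image illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: web
    image: docker.io/library/httpd:2.4.38-alpine
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m        # equal to the request, so the pod is Guaranteed
        memory: 100Mi
EOF
# The class the test asserts on is stored in status:
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'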
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":125,"skipped":1952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:07:41.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:07:42.217: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:07:44.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778062, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778062, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778062, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778062, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:07:47.289: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 26 00:07:51.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-1362 to-be-attached-pod -i -c=container1' Mar 26 00:07:51.472: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:07:51.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1362" for this suite. STEP: Destroying namespace "webhook-1362-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.916 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":126,"skipped":1979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:07:51.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:02.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5003" for this suite. • [SLOW TEST:11.226 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":127,"skipped":2004,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:02.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:06.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8349" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2010,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:06.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:07.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1297" for this suite. STEP: Destroying namespace "nspatchtest-88efe144-02bc-4713-bb52-d60315ef92bd-5211" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":129,"skipped":2017,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:07.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:08:07.176: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/ pods/ (200; 13.908192ms)
Mar 26 00:08:07.199: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 23.116307ms)
Mar 26 00:08:07.203: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.808328ms)
Mar 26 00:08:07.207: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.994412ms)
Mar 26 00:08:07.211: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.820724ms)
Mar 26 00:08:07.215: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.717714ms)
Mar 26 00:08:07.218: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.505289ms)
Mar 26 00:08:07.222: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.587507ms)
Mar 26 00:08:07.225: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.65603ms)
Mar 26 00:08:07.229: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.942012ms)
Mar 26 00:08:07.273: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 43.839823ms)
Mar 26 00:08:07.277: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.740068ms)
Mar 26 00:08:07.281: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.094669ms)
Mar 26 00:08:07.285: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.959804ms)
Mar 26 00:08:07.288: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.019948ms)
Mar 26 00:08:07.291: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.937128ms)
Mar 26 00:08:07.295: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.405635ms)
Mar 26 00:08:07.298: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.296191ms)
Mar 26 00:08:07.302: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.595541ms)
Mar 26 00:08:07.305: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/
(200; 3.659675ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:07.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6762" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":130,"skipped":2027,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:07.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:08:07.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628" in namespace "downward-api-1784" to be "Succeeded or Failed" Mar 26 00:08:07.418: INFO: Pod "downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628": Phase="Pending", Reason="", readiness=false. Elapsed: 5.542942ms Mar 26 00:08:09.423: INFO: Pod "downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009901525s Mar 26 00:08:11.427: INFO: Pod "downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014599571s STEP: Saw pod success Mar 26 00:08:11.427: INFO: Pod "downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628" satisfied condition "Succeeded or Failed" Mar 26 00:08:11.431: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628 container client-container: STEP: delete the pod Mar 26 00:08:11.468: INFO: Waiting for pod downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628 to disappear Mar 26 00:08:11.477: INFO: Pod downwardapi-volume-57c341f6-c5ca-4cd2-ac7c-5b5034e44628 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:11.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1784" for this suite. 
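The mode the test checks is set per item in the downwardAPI volume source, independently of the volume's defaultMode. A sketch of the pod shape, with illustrative names and busybox standing in for the test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400        # the per-item file mode the test asserts on
EOF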
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2034,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:11.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:08:11.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0" in namespace "downward-api-7753" to be "Succeeded or Failed" Mar 26 00:08:11.619: INFO: Pod "downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.03524ms Mar 26 00:08:13.631: INFO: Pod "downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026047705s Mar 26 00:08:15.667: INFO: Pod "downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062123735s STEP: Saw pod success Mar 26 00:08:15.668: INFO: Pod "downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0" satisfied condition "Succeeded or Failed" Mar 26 00:08:15.670: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0 container client-container: STEP: delete the pod Mar 26 00:08:15.688: INFO: Waiting for pod downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0 to disappear Mar 26 00:08:15.705: INFO: Pod downwardapi-volume-9ab988ae-2621-4d4a-9438-14cd11fe31b0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:15.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7753" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2099,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:15.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:08:16.294: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:08:18.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778096, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778096, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778096, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778096, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:08:21.371: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:08:21.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1035-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:22.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9392" for this suite. STEP: Destroying namespace "webhook-9392-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.880 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":133,"skipped":2110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:22.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:08:38.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6937" for this suite. • [SLOW TEST:16.262 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":134,"skipped":2143,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:08:38.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-0b6488e7-b62a-48c5-8070-adffc41bad85 in namespace container-probe-3737 Mar 26 00:08:42.920: INFO: Started pod busybox-0b6488e7-b62a-48c5-8070-adffc41bad85 in namespace container-probe-3737 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 00:08:42.923: INFO: Initial restart count of pod busybox-0b6488e7-b62a-48c5-8070-adffc41bad85 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:12:43.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3737" for this suite. 
• [SLOW TEST:244.798 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2165,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:12:43.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:12:43.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1" in namespace "projected-9095" to be "Succeeded or Failed" Mar 26 00:12:43.740: INFO: Pod "downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.152155ms Mar 26 00:12:45.759: INFO: Pod "downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022081383s Mar 26 00:12:47.771: INFO: Pod "downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033749574s STEP: Saw pod success Mar 26 00:12:47.771: INFO: Pod "downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1" satisfied condition "Succeeded or Failed" Mar 26 00:12:47.773: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1 container client-container: STEP: delete the pod Mar 26 00:12:47.802: INFO: Waiting for pod downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1 to disappear Mar 26 00:12:47.818: INFO: Pod downwardapi-volume-34c9ae75-2fcf-4f7a-8ea0-a8c10e7cb4c1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:12:47.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9095" for this suite. 
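The projected variant of the previous downward API check differs only in the volume wiring: the downwardAPI source sits under projected.sources, where it can share one mount with secret and configMap sources. A sketch of the volume stanza in context (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
EOF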
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:12:47.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 26 00:12:47.886: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:12:47.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3819" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":137,"skipped":2258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:12:47.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 26 00:12:48.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7291' Mar 26 00:12:48.132: INFO: stderr: "" Mar 26 00:12:48.132: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 26 00:12:53.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7291 -o json' Mar 26 00:12:53.272: INFO: stderr: "" Mar 26 00:12:53.272: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-26T00:12:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7291\",\n \"resourceVersion\": \"2810820\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7291/pods/e2e-test-httpd-pod\",\n \"uid\": \"dd2ae4ef-3b32-4170-a474-719be55a416e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rh6sm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rh6sm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rh6sm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T00:12:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T00:12:50Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T00:12:50Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T00:12:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e5fc17605065caea01ad9bb026f8efea3aa7aa75dc3487763fdcbe393bf263fc\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-26T00:12:50Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.20\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.20\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-26T00:12:48Z\"\n }\n}\n" STEP: replace the image in the pod Mar 26 00:12:53.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7291' Mar 26 00:12:53.553: INFO: stderr: "" Mar 26 00:12:53.553: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right 
image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 26 00:12:53.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7291' Mar 26 00:13:03.005: INFO: stderr: "" Mar 26 00:13:03.005: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:13:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7291" for this suite. • [SLOW TEST:15.025 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":138,"skipped":2309,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:13:03.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:13:33.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-runtime-1898" for this suite. • [SLOW TEST:30.501 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2323,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:13:33.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-b66b485e-ea0f-46fa-bd25-c3fd1f4b09c7 STEP: Creating a pod to test consume configMaps Mar 26 00:13:33.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1" in namespace "configmap-1565" to be "Succeeded or Failed" Mar 26 00:13:33.579: INFO: Pod "pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228033ms Mar 26 00:13:35.583: INFO: Pod "pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007305033s Mar 26 00:13:37.587: INFO: Pod "pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011676422s STEP: Saw pod success Mar 26 00:13:37.587: INFO: Pod "pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1" satisfied condition "Succeeded or Failed" Mar 26 00:13:37.590: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1 container configmap-volume-test: STEP: delete the pod Mar 26 00:13:37.625: INFO: Waiting for pod pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1 to disappear Mar 26 00:13:37.647: INFO: Pod pod-configmaps-7c412f6a-d543-48f7-948b-365897aa37f1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:13:37.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1565" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2342,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:13:37.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3725.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3725.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3725.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3725.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3725.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3725.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 00:13:43.796: INFO: DNS probes using dns-3725/dns-test-76b31740-2f8a-4d44-bd42-7678444f6f9b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:13:43.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3725" for this suite. 
• [SLOW TEST:6.293 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":141,"skipped":2345,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:13:43.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Mar 26 00:13:44.112: INFO: Waiting up to 5m0s for pod "var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6" in namespace "var-expansion-5450" to be "Succeeded or Failed" Mar 26 00:13:44.206: INFO: Pod "var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6": Phase="Pending", Reason="", readiness=false. Elapsed: 93.591972ms Mar 26 00:13:46.295: INFO: Pod "var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183050159s Mar 26 00:13:48.360: INFO: Pod "var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.248488637s STEP: Saw pod success Mar 26 00:13:48.361: INFO: Pod "var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6" satisfied condition "Succeeded or Failed" Mar 26 00:13:48.415: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6 container dapi-container: STEP: delete the pod Mar 26 00:13:48.443: INFO: Waiting for pod var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6 to disappear Mar 26 00:13:48.453: INFO: Pod var-expansion-e092d010-5dae-4e50-9b48-4b7c608585b6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:13:48.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5450" for this suite. 
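------------------------------
The var-expansion test above relies on $(NAME) references in env values: the kubelet expands a reference when NAME is defined earlier in the same env list and leaves unknown references as literal text. A short sketch of a container env using this mechanism; the variable names and values are illustrative, not the test's generated ones.

package e2esketch

import corev1 "k8s.io/api/core/v1"

// composedEnvContainer shows $(VAR) composition: FOOBAR is assembled from FOO
// and BAR because both are defined earlier in the same env list. A reference
// to an undefined name would be left verbatim in the value.
func composedEnvContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{Name: "FOO", Value: "foo-value"},
			{Name: "BAR", Value: "bar-value"},
			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"}, // expands to foo-value;;bar-value
		},
	}
}
------------------------------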
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2351,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:13:48.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6949 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6949 STEP: creating replication controller externalsvc in namespace services-6949 I0326 00:13:48.672555 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6949, replica count: 2 I0326 00:13:51.723025 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 00:13:54.723329 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 26 00:13:54.758: INFO: Creating new exec pod Mar 26 00:13:58.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6949 execpodmxscp -- /bin/sh -x -c nslookup clusterip-service' Mar 26 00:13:59.013: INFO: stderr: "I0326 00:13:58.913801 1460 log.go:172] (0xc000a509a0) (0xc00083d540) Create stream\nI0326 00:13:58.913869 1460 log.go:172] (0xc000a509a0) (0xc00083d540) Stream added, broadcasting: 1\nI0326 00:13:58.916277 1460 log.go:172] (0xc000a509a0) Reply frame received for 1\nI0326 00:13:58.916308 1460 log.go:172] (0xc000a509a0) (0xc0006995e0) Create stream\nI0326 00:13:58.916315 1460 log.go:172] (0xc000a509a0) (0xc0006995e0) Stream added, broadcasting: 3\nI0326 00:13:58.917310 1460 log.go:172] (0xc000a509a0) Reply frame received for 3\nI0326 00:13:58.917376 1460 log.go:172] (0xc000a509a0) (0xc000516000) Create stream\nI0326 00:13:58.917394 1460 log.go:172] (0xc000a509a0) (0xc000516000) Stream added, broadcasting: 5\nI0326 00:13:58.918497 1460 log.go:172] (0xc000a509a0) Reply frame received for 5\nI0326 00:13:58.999885 1460 log.go:172] (0xc000a509a0) Data frame received for 5\nI0326 00:13:58.999915 1460 log.go:172] (0xc000516000) (5) Data frame handling\nI0326 00:13:58.999936 1460 log.go:172] (0xc000516000) (5) Data frame sent\n+ nslookup clusterip-service\nI0326 00:13:59.005067 1460 log.go:172] (0xc000a509a0) Data frame received for 3\nI0326 00:13:59.005090 1460 log.go:172] (0xc0006995e0) (3) Data frame handling\nI0326 00:13:59.005244 1460 log.go:172] (0xc0006995e0) (3) 
Data frame sent\nI0326 00:13:59.006211 1460 log.go:172] (0xc000a509a0) Data frame received for 3\nI0326 00:13:59.006240 1460 log.go:172] (0xc0006995e0) (3) Data frame handling\nI0326 00:13:59.006261 1460 log.go:172] (0xc0006995e0) (3) Data frame sent\nI0326 00:13:59.006575 1460 log.go:172] (0xc000a509a0) Data frame received for 3\nI0326 00:13:59.006605 1460 log.go:172] (0xc0006995e0) (3) Data frame handling\nI0326 00:13:59.006671 1460 log.go:172] (0xc000a509a0) Data frame received for 5\nI0326 00:13:59.006686 1460 log.go:172] (0xc000516000) (5) Data frame handling\nI0326 00:13:59.008942 1460 log.go:172] (0xc000a509a0) Data frame received for 1\nI0326 00:13:59.008964 1460 log.go:172] (0xc00083d540) (1) Data frame handling\nI0326 00:13:59.008979 1460 log.go:172] (0xc00083d540) (1) Data frame sent\nI0326 00:13:59.009083 1460 log.go:172] (0xc000a509a0) (0xc00083d540) Stream removed, broadcasting: 1\nI0326 00:13:59.009530 1460 log.go:172] (0xc000a509a0) (0xc00083d540) Stream removed, broadcasting: 1\nI0326 00:13:59.009553 1460 log.go:172] (0xc000a509a0) (0xc0006995e0) Stream removed, broadcasting: 3\nI0326 00:13:59.009705 1460 log.go:172] (0xc000a509a0) Go away received\nI0326 00:13:59.009743 1460 log.go:172] (0xc000a509a0) (0xc000516000) Stream removed, broadcasting: 5\n" Mar 26 00:13:59.013: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6949.svc.cluster.local\tcanonical name = externalsvc.services-6949.svc.cluster.local.\nName:\texternalsvc.services-6949.svc.cluster.local\nAddress: 10.96.147.239\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6949, will wait for the garbage collector to delete the pods Mar 26 00:13:59.073: INFO: Deleting ReplicationController externalsvc took: 5.96291ms Mar 26 00:13:59.373: INFO: Terminating ReplicationController externalsvc pods took: 300.274906ms Mar 26 00:14:13.109: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:14:13.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6949" for this suite. 
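------------------------------
The nslookup output above shows the converted Service answering as a CNAME to externalsvc.services-6949.svc.cluster.local, which is exactly what type=ExternalName produces in cluster DNS. A minimal sketch of the in-place type flip with client-go follows; the Get/Update signatures with a context match client-go v0.18 and later, consistent with the framework version in this run, and the clientset wiring plus error handling are assumed, not the framework's actual helper.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName flips an existing ClusterIP Service to type=ExternalName so
// that cluster DNS serves its name as a CNAME to the target FQDN, as seen in
// the nslookup output above.
func toExternalName(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, "clusterip-service", metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName Service must not keep a cluster IP
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}
------------------------------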
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:24.682 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":143,"skipped":2353,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:14:13.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 26 00:14:21.336: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:21.340: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 00:14:23.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:23.344: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 00:14:25.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:25.345: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 00:14:27.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:27.344: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 00:14:29.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:29.345: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 00:14:31.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:31.346: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 00:14:33.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 00:14:33.345: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:14:33.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-762" for this suite. 
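------------------------------
The pod above carries a lifecycle.preStop exec handler: when the delete is issued, the kubelet runs the hook command inside the container before sending SIGTERM, which is why the pod lingers through the "still exists" polling in the log. A sketch of such a pod follows; the handler type is corev1.Handler in this release (renamed LifecycleHandler in much later releases), and the wget target stands in for the test's HTTP handler pod, so the host, port, and path are assumptions.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPreStopExecHook builds a pod whose container reports to an external
// handler from its preStop hook. The hook runs inside the container after the
// delete is issued and must finish within terminationGracePeriodSeconds.
func podWithPreStopExecHook(handlerHost string) *corev1.Pod {
	grace := int64(30)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c",
								"wget -qO- http://" + handlerHost + ":8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}
------------------------------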
• [SLOW TEST:20.231 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:14:33.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-5912 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5912 STEP: Deleting pre-stop pod Mar 26 00:14:46.488: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:14:46.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5912" for this suite. 
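------------------------------
Both PreStop tests end with the same pattern visible in the log: delete the pod, then poll roughly every two seconds until a Get returns NotFound. A sketch of that loop using apimachinery's wait helper; the interval and timeout are assumptions chosen to match the log's cadence, not the framework's exact constants.

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteAndWaitGone mirrors the "Waiting for pod ... to disappear" loop above:
// issue the delete (which fires any preStop hook), then poll until the API
// server reports the pod NotFound.
func deleteAndWaitGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod no longer exists
		}
		return false, err // nil while the pod still exists; aborts on real errors
	})
}
------------------------------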
• [SLOW TEST:13.202 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":145,"skipped":2415,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:14:46.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:14:46.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9848" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":146,"skipped":2425,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:14:46.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:14:46.980: INFO: Creating deployment "webserver-deployment" Mar 26 00:14:46.988: INFO: Waiting for observed generation 1 Mar 26 00:14:49.050: INFO: Waiting for all required pods to come up Mar 26 00:14:49.054: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 26 00:14:57.079: INFO: Waiting for deployment "webserver-deployment" to complete Mar 26 00:14:57.087: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 26 00:14:57.092: INFO: Updating deployment webserver-deployment Mar 26 00:14:57.092: INFO: Waiting for observed generation 2 Mar 26 00:14:59.100: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 26 00:14:59.102: INFO: Waiting for the first 
rollout's replicaset to have .spec.replicas = 8 Mar 26 00:14:59.105: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 26 00:14:59.113: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 26 00:14:59.113: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 26 00:14:59.115: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 26 00:14:59.119: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 26 00:14:59.120: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 26 00:14:59.126: INFO: Updating deployment webserver-deployment Mar 26 00:14:59.126: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 26 00:14:59.165: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 26 00:14:59.188: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 26 00:15:02.219: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8590 /apis/apps/v1/namespaces/deployment-8590/deployments/webserver-deployment 68d004d9-b72f-458c-9a62-ca3b8995a731 2811839 3 2020-03-26 00:14:46 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002de5c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-26 00:14:59 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-26 00:14:59 +0000 UTC,LastTransitionTime:2020-03-26 00:14:46 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 26 00:15:02.915: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8590 
/apis/apps/v1/namespaces/deployment-8590/replicasets/webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 2811837 3 2020-03-26 00:14:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 68d004d9-b72f-458c-9a62-ca3b8995a731 0xc0044df6d7 0xc0044df6d8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044df748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 26 00:15:02.915: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 26 00:15:02.915: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8590 /apis/apps/v1/namespaces/deployment-8590/replicasets/webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 2811814 3 2020-03-26 00:14:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 68d004d9-b72f-458c-9a62-ca3b8995a731 0xc0044df617 0xc0044df618}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044df678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 26 00:15:03.014: INFO: Pod "webserver-deployment-595b5b9587-46wct" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-46wct webserver-deployment-595b5b9587- deployment-8590 
/api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-46wct 4485c25a-7f7f-4e3d-84eb-80cd73f670ae 2811886 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc0044dfc97 0xc0044dfc98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.014: INFO: Pod "webserver-deployment-595b5b9587-69p5z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-69p5z webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-69p5z 3ad33810-9ff6-480b-8501-2167335e33d6 2811648 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc0044dfdf7 0xc0044dfdf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.134,StartTime:2020-03-26 00:14:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0fde60b6c8136822a5773a274b7331d129a3e223ff1866f79f20a2da94311da8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.014: INFO: Pod "webserver-deployment-595b5b9587-78xhn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-78xhn webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-78xhn 963a7912-a8e8-4c87-8983-32f5ceae475a 2811853 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc0044dff77 0xc0044dff78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.015: INFO: Pod "webserver-deployment-595b5b9587-7r4xz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7r4xz webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-7r4xz d2f299b3-0922-4cac-bfc4-0fd9e4cb2cad 2811674 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a0e7 0xc00441a0e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Pr
iorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.137,StartTime:2020-03-26 00:14:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a0f2c82ae63ffb49f8bd130c68fdd8763471c57367594fec189bbfe6f383ba65,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.015: INFO: Pod "webserver-deployment-595b5b9587-8ln8n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8ln8n webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-8ln8n 8f2c5812-ea54-47e2-8599-c351ade3662d 2811831 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a267 0xc00441a268}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.015: INFO: Pod "webserver-deployment-595b5b9587-9bgdb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9bgdb webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-9bgdb 2ff265e1-f0e7-4172-8bab-744a8e824022 2811896 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a3c7 0xc00441a3c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias
{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.015: INFO: Pod "webserver-deployment-595b5b9587-9ldlk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9ldlk webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-9ldlk a9967e6a-2601-42fd-add0-c5a787721b29 2811843 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a527 0xc00441a528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.015: INFO: Pod "webserver-deployment-595b5b9587-9xbtg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9xbtg webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-9xbtg d9651368-8370-4374-a724-6773c3579d58 2811878 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a697 0xc00441a698}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.015: INFO: Pod "webserver-deployment-595b5b9587-gch9x" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gch9x webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-gch9x 829ad66b-715d-4e35-a3a7-80609c12f159 2811643 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a7f7 0xc00441a7f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.29,StartTime:2020-03-26 00:14:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1009f4ca04db2c943f3c261bb1b16861aae8a3664ac5fa37287ada5c57057a05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.016: INFO: Pod "webserver-deployment-595b5b9587-jbcrr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jbcrr webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-jbcrr 825596a2-d13f-481a-993d-481dea7a3d8b 2811670 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441a997 0xc00441a998}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.136,StartTime:2020-03-26 00:14:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d94141da9edde706e08d46849d880f82bae7824e6ff6fbbdd1d388d25b3161f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.016: INFO: Pod "webserver-deployment-595b5b9587-kljhf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kljhf webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-kljhf 1e6bbb59-5f63-43ea-8526-6f401e55cb9d 2811605 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441ab17 0xc00441ab18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.28,StartTime:2020-03-26 00:14:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://afb1ab7953ee3b2c3d5022fdb50f93e9ade90f0bad968f432c165e02560e2056,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.016: INFO: Pod "webserver-deployment-595b5b9587-l6vqb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l6vqb webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-l6vqb 400fa81a-5166-44ff-b8ce-01b410b19913 2811683 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441aca7 0xc00441aca8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable
,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.32,StartTime:2020-03-26 00:14:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://611dfd29338f39621f8c58f042eb05103f10277af0bc0cd6614729e062817955,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.016: INFO: Pod "webserver-deployment-595b5b9587-qjcq4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qjcq4 webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-qjcq4 36234043-6885-4125-b376-767aa32e65c0 2811813 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441ae27 0xc00441ae28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.016: INFO: Pod "webserver-deployment-595b5b9587-qrzm2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qrzm2 webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-qrzm2 ea040e6a-dad5-4140-9393-d93ca6ebf4c3 2811851 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441af87 0xc00441af88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.017: INFO: Pod "webserver-deployment-595b5b9587-t25ns" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t25ns webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-t25ns df5e3e91-9719-4a10-83bd-ed0f7ca011e2 2811879 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441b0e7 0xc00441b0e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.017: INFO: Pod "webserver-deployment-595b5b9587-tbmm2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tbmm2 webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-tbmm2 715990c7-99c8-4bc7-852d-56375c2417af 2811826 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441b247 0xc00441b248}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},
PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.017: INFO: Pod "webserver-deployment-595b5b9587-vj44r" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vj44r webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-vj44r f680d4b0-c26e-48c5-a5e1-dff744e46af6 2811656 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441b3a7 0xc00441b3a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.31,StartTime:2020-03-26 00:14:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8d8d49b7c46f2432e5ce46a51690329399df8e15c6e1420c3c99a1e80bb17153,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.017: INFO: Pod "webserver-deployment-595b5b9587-xzn8g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xzn8g webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-xzn8g c9512296-6d2f-4c2e-af9b-a55124fa59c0 2811888 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441b527 0xc00441b528}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.017: INFO: Pod "webserver-deployment-595b5b9587-zw4lp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zw4lp webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-zw4lp 4d24448c-ba6a-439c-92ac-8ececff98c8a 2811641 0 2020-03-26 00:14:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441b6a7 0xc00441b6a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.135,StartTime:2020-03-26 00:14:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:14:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e86440798ecce26d71f326440258f3c6ae550dacb65b7bbe676cbf9dfd31181e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.018: INFO: Pod "webserver-deployment-595b5b9587-zwzcl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zwzcl webserver-deployment-595b5b9587- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-595b5b9587-zwzcl 280b0ea4-3169-4b0b-baaa-55158c31c345 2811845 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929bc68f-263e-4d22-a6b8-7b52834c9f27 0xc00441b827 0xc00441b828}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.018: INFO: Pod "webserver-deployment-c7997dcc8-4dbbg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4dbbg webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-4dbbg cac47f33-a995-440a-9ca2-b8c38bfef423 2811902 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc00441b987 0xc00441b988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.018: INFO: Pod "webserver-deployment-c7997dcc8-8vhhz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8vhhz webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-8vhhz 061277b5-6a0a-41e7-8b17-90c7c7b2f160 2811861 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc00441bb07 0xc00441bb08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pr
eemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.018: INFO: Pod "webserver-deployment-c7997dcc8-bq5qt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bq5qt webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-bq5qt 2fd95e36-652f-4499-9dd7-5e421b5d6225 2811833 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc00441bc87 0xc00441bc88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.018: INFO: Pod "webserver-deployment-c7997dcc8-c889q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c889q webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-c889q 4f4837c6-ecda-4e11-8d7c-22d1e7a3fb83 2811717 0 2020-03-26 00:14:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc00441be37 0xc00441be38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pr
eemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.018: INFO: Pod "webserver-deployment-c7997dcc8-fgd2v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fgd2v webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-fgd2v 88991636-56fb-4d35-9176-6fa34433eef3 2811898 0 2020-03-26 00:14:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc00441bfd7 0xc00441bfd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.140,StartTime:2020-03-26 
00:14:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.019: INFO: Pod "webserver-deployment-c7997dcc8-jbkcg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jbkcg webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-jbkcg d8f77176-9497-4b05-8735-eeed8f874d04 2811864 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc000f864c7 0xc000f864c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:
NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.019: INFO: Pod "webserver-deployment-c7997dcc8-mdb7b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mdb7b webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-mdb7b dd516f50-f630-418a-8987-110e83b78ad0 2811863 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc000f86aa7 0xc000f86aa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.019: INFO: Pod "webserver-deployment-c7997dcc8-n6rqw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n6rqw webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-n6rqw 99f66db2-d778-44af-976b-ff3ecd2cc6f9 2811739 0 2020-03-26 00:14:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc000f872f7 0xc000f872f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pre
emptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.019: INFO: Pod "webserver-deployment-c7997dcc8-ptm5c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ptm5c webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-ptm5c ec4b91d5-a41d-49dc-a95c-373085518dbe 2811840 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc000f879b7 0xc000f879b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.019: INFO: Pod "webserver-deployment-c7997dcc8-q5xkh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q5xkh webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-q5xkh f3ad819f-2d9e-435d-92ac-f3ac965642f5 2811841 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc000f87da7 0xc000f87da8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pr
eemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.019: INFO: Pod "webserver-deployment-c7997dcc8-rr9gd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rr9gd webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-rr9gd fca36ec0-be0d-49c9-a5b5-171580487b6b 2811860 0 2020-03-26 00:14:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc000622537 0xc000622538}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-26 00:14:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.020: INFO: Pod "webserver-deployment-c7997dcc8-rsfv2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rsfv2 webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-rsfv2 f09e3c45-1a0d-441f-86a8-3cef4c710322 2811893 0 2020-03-26 00:14:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc0006238d7 0xc0006238d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pre
emptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.139,StartTime:2020-03-26 00:14:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:15:03.020: INFO: Pod "webserver-deployment-c7997dcc8-xm724" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xm724 webserver-deployment-c7997dcc8- deployment-8590 /api/v1/namespaces/deployment-8590/pods/webserver-deployment-c7997dcc8-xm724 c9a6319c-6f90-4e1a-b9cc-6f730ee33c78 2811740 0 2020-03-26 00:14:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fa3574f-c03b-410f-8fcf-6e9baac92dd1 0xc004326187 0xc004326188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6xgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6xgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6xgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:14:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-26 00:14:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:15:03.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8590" for this suite. • [SLOW TEST:16.564 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":147,"skipped":2440,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:15:03.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1914 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1914 I0326 00:15:04.343098 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1914, replica count: 2 I0326 00:15:07.393495 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 00:15:10.393726 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 00:15:13.394009 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 00:15:16.394292 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 26 00:15:16.394: INFO: Creating new exec pod Mar 26 00:15:23.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1914 execpodbgktj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 
26 00:15:27.927: INFO: stderr: "I0326 00:15:27.839072 1481 log.go:172] (0xc0007d4b00) (0xc0006f1680) Create stream\nI0326 00:15:27.839114 1481 log.go:172] (0xc0007d4b00) (0xc0006f1680) Stream added, broadcasting: 1\nI0326 00:15:27.842926 1481 log.go:172] (0xc0007d4b00) Reply frame received for 1\nI0326 00:15:27.842983 1481 log.go:172] (0xc0007d4b00) (0xc000bb40a0) Create stream\nI0326 00:15:27.842999 1481 log.go:172] (0xc0007d4b00) (0xc000bb40a0) Stream added, broadcasting: 3\nI0326 00:15:27.844102 1481 log.go:172] (0xc0007d4b00) Reply frame received for 3\nI0326 00:15:27.844143 1481 log.go:172] (0xc0007d4b00) (0xc0008ca0a0) Create stream\nI0326 00:15:27.844164 1481 log.go:172] (0xc0007d4b00) (0xc0008ca0a0) Stream added, broadcasting: 5\nI0326 00:15:27.845449 1481 log.go:172] (0xc0007d4b00) Reply frame received for 5\nI0326 00:15:27.917790 1481 log.go:172] (0xc0007d4b00) Data frame received for 5\nI0326 00:15:27.917819 1481 log.go:172] (0xc0008ca0a0) (5) Data frame handling\nI0326 00:15:27.917841 1481 log.go:172] (0xc0008ca0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0326 00:15:27.918431 1481 log.go:172] (0xc0007d4b00) Data frame received for 5\nI0326 00:15:27.918453 1481 log.go:172] (0xc0008ca0a0) (5) Data frame handling\nI0326 00:15:27.918470 1481 log.go:172] (0xc0008ca0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0326 00:15:27.918957 1481 log.go:172] (0xc0007d4b00) Data frame received for 5\nI0326 00:15:27.918983 1481 log.go:172] (0xc0008ca0a0) (5) Data frame handling\nI0326 00:15:27.919098 1481 log.go:172] (0xc0007d4b00) Data frame received for 3\nI0326 00:15:27.919152 1481 log.go:172] (0xc000bb40a0) (3) Data frame handling\nI0326 00:15:27.921685 1481 log.go:172] (0xc0007d4b00) Data frame received for 1\nI0326 00:15:27.921717 1481 log.go:172] (0xc0006f1680) (1) Data frame handling\nI0326 00:15:27.921736 1481 log.go:172] (0xc0006f1680) (1) Data frame sent\nI0326 00:15:27.921756 1481 log.go:172] (0xc0007d4b00) (0xc0006f1680) Stream removed, broadcasting: 1\nI0326 00:15:27.921902 1481 log.go:172] (0xc0007d4b00) Go away received\nI0326 00:15:27.922195 1481 log.go:172] (0xc0007d4b00) (0xc0006f1680) Stream removed, broadcasting: 1\nI0326 00:15:27.922215 1481 log.go:172] (0xc0007d4b00) (0xc000bb40a0) Stream removed, broadcasting: 3\nI0326 00:15:27.922226 1481 log.go:172] (0xc0007d4b00) (0xc0008ca0a0) Stream removed, broadcasting: 5\n" Mar 26 00:15:27.927: INFO: stdout: "" Mar 26 00:15:27.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1914 execpodbgktj -- /bin/sh -x -c nc -zv -t -w 2 10.96.229.229 80' Mar 26 00:15:28.121: INFO: stderr: "I0326 00:15:28.040412 1516 log.go:172] (0xc000a4e8f0) (0xc00066d5e0) Create stream\nI0326 00:15:28.040458 1516 log.go:172] (0xc000a4e8f0) (0xc00066d5e0) Stream added, broadcasting: 1\nI0326 00:15:28.042957 1516 log.go:172] (0xc000a4e8f0) Reply frame received for 1\nI0326 00:15:28.043000 1516 log.go:172] (0xc000a4e8f0) (0xc00066d680) Create stream\nI0326 00:15:28.043009 1516 log.go:172] (0xc000a4e8f0) (0xc00066d680) Stream added, broadcasting: 3\nI0326 00:15:28.044011 1516 log.go:172] (0xc000a4e8f0) Reply frame received for 3\nI0326 00:15:28.044045 1516 log.go:172] (0xc000a4e8f0) (0xc000a78000) Create stream\nI0326 00:15:28.044057 1516 log.go:172] (0xc000a4e8f0) (0xc000a78000) Stream added, broadcasting: 5\nI0326 00:15:28.044865 1516 log.go:172] (0xc000a4e8f0) Reply frame received for 5\nI0326 
00:15:28.114029 1516 log.go:172] (0xc000a4e8f0) Data frame received for 5\nI0326 00:15:28.114104 1516 log.go:172] (0xc000a78000) (5) Data frame handling\nI0326 00:15:28.114139 1516 log.go:172] (0xc000a78000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.229.229 80\nConnection to 10.96.229.229 80 port [tcp/http] succeeded!\nI0326 00:15:28.114177 1516 log.go:172] (0xc000a4e8f0) Data frame received for 3\nI0326 00:15:28.114225 1516 log.go:172] (0xc00066d680) (3) Data frame handling\nI0326 00:15:28.114254 1516 log.go:172] (0xc000a4e8f0) Data frame received for 5\nI0326 00:15:28.114271 1516 log.go:172] (0xc000a78000) (5) Data frame handling\nI0326 00:15:28.115954 1516 log.go:172] (0xc000a4e8f0) Data frame received for 1\nI0326 00:15:28.115986 1516 log.go:172] (0xc00066d5e0) (1) Data frame handling\nI0326 00:15:28.116015 1516 log.go:172] (0xc00066d5e0) (1) Data frame sent\nI0326 00:15:28.116039 1516 log.go:172] (0xc000a4e8f0) (0xc00066d5e0) Stream removed, broadcasting: 1\nI0326 00:15:28.116184 1516 log.go:172] (0xc000a4e8f0) Go away received\nI0326 00:15:28.116496 1516 log.go:172] (0xc000a4e8f0) (0xc00066d5e0) Stream removed, broadcasting: 1\nI0326 00:15:28.116518 1516 log.go:172] (0xc000a4e8f0) (0xc00066d680) Stream removed, broadcasting: 3\nI0326 00:15:28.116531 1516 log.go:172] (0xc000a4e8f0) (0xc000a78000) Stream removed, broadcasting: 5\n" Mar 26 00:15:28.121: INFO: stdout: "" Mar 26 00:15:28.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1914 execpodbgktj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31665' Mar 26 00:15:28.316: INFO: stderr: "I0326 00:15:28.240944 1538 log.go:172] (0xc00003a840) (0xc0006d7540) Create stream\nI0326 00:15:28.241003 1538 log.go:172] (0xc00003a840) (0xc0006d7540) Stream added, broadcasting: 1\nI0326 00:15:28.244573 1538 log.go:172] (0xc00003a840) Reply frame received for 1\nI0326 00:15:28.244660 1538 log.go:172] (0xc00003a840) (0xc00053ca00) Create stream\nI0326 00:15:28.244687 1538 log.go:172] (0xc00003a840) (0xc00053ca00) Stream added, broadcasting: 3\nI0326 00:15:28.246017 1538 log.go:172] (0xc00003a840) Reply frame received for 3\nI0326 00:15:28.246070 1538 log.go:172] (0xc00003a840) (0xc000430000) Create stream\nI0326 00:15:28.246088 1538 log.go:172] (0xc00003a840) (0xc000430000) Stream added, broadcasting: 5\nI0326 00:15:28.247425 1538 log.go:172] (0xc00003a840) Reply frame received for 5\nI0326 00:15:28.309930 1538 log.go:172] (0xc00003a840) Data frame received for 3\nI0326 00:15:28.309967 1538 log.go:172] (0xc00053ca00) (3) Data frame handling\nI0326 00:15:28.309990 1538 log.go:172] (0xc00003a840) Data frame received for 5\nI0326 00:15:28.309998 1538 log.go:172] (0xc000430000) (5) Data frame handling\nI0326 00:15:28.310007 1538 log.go:172] (0xc000430000) (5) Data frame sent\nI0326 00:15:28.310018 1538 log.go:172] (0xc00003a840) Data frame received for 5\nI0326 00:15:28.310031 1538 log.go:172] (0xc000430000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31665\nConnection to 172.17.0.13 31665 port [tcp/31665] succeeded!\nI0326 00:15:28.311881 1538 log.go:172] (0xc00003a840) Data frame received for 1\nI0326 00:15:28.311907 1538 log.go:172] (0xc0006d7540) (1) Data frame handling\nI0326 00:15:28.311924 1538 log.go:172] (0xc0006d7540) (1) Data frame sent\nI0326 00:15:28.311940 1538 log.go:172] (0xc00003a840) (0xc0006d7540) Stream removed, broadcasting: 1\nI0326 00:15:28.311990 1538 log.go:172] (0xc00003a840) Go away received\nI0326 00:15:28.312299 1538 log.go:172] 
(0xc00003a840) (0xc0006d7540) Stream removed, broadcasting: 1\nI0326 00:15:28.312318 1538 log.go:172] (0xc00003a840) (0xc00053ca00) Stream removed, broadcasting: 3\nI0326 00:15:28.312337 1538 log.go:172] (0xc00003a840) (0xc000430000) Stream removed, broadcasting: 5\n" Mar 26 00:15:28.316: INFO: stdout: "" Mar 26 00:15:28.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1914 execpodbgktj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31665' Mar 26 00:15:28.556: INFO: stderr: "I0326 00:15:28.456705 1560 log.go:172] (0xc000b50370) (0xc0007ff5e0) Create stream\nI0326 00:15:28.456749 1560 log.go:172] (0xc000b50370) (0xc0007ff5e0) Stream added, broadcasting: 1\nI0326 00:15:28.459388 1560 log.go:172] (0xc000b50370) Reply frame received for 1\nI0326 00:15:28.459459 1560 log.go:172] (0xc000b50370) (0xc0003b6aa0) Create stream\nI0326 00:15:28.459485 1560 log.go:172] (0xc000b50370) (0xc0003b6aa0) Stream added, broadcasting: 3\nI0326 00:15:28.460420 1560 log.go:172] (0xc000b50370) Reply frame received for 3\nI0326 00:15:28.460468 1560 log.go:172] (0xc000b50370) (0xc0003b6b40) Create stream\nI0326 00:15:28.460479 1560 log.go:172] (0xc000b50370) (0xc0003b6b40) Stream added, broadcasting: 5\nI0326 00:15:28.461652 1560 log.go:172] (0xc000b50370) Reply frame received for 5\nI0326 00:15:28.547625 1560 log.go:172] (0xc000b50370) Data frame received for 3\nI0326 00:15:28.547672 1560 log.go:172] (0xc0003b6aa0) (3) Data frame handling\nI0326 00:15:28.547698 1560 log.go:172] (0xc000b50370) Data frame received for 5\nI0326 00:15:28.547712 1560 log.go:172] (0xc0003b6b40) (5) Data frame handling\nI0326 00:15:28.547734 1560 log.go:172] (0xc0003b6b40) (5) Data frame sent\nI0326 00:15:28.547750 1560 log.go:172] (0xc000b50370) Data frame received for 5\nI0326 00:15:28.547760 1560 log.go:172] (0xc0003b6b40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31665\nConnection to 172.17.0.12 31665 port [tcp/31665] succeeded!\nI0326 00:15:28.549944 1560 log.go:172] (0xc000b50370) Data frame received for 1\nI0326 00:15:28.549958 1560 log.go:172] (0xc0007ff5e0) (1) Data frame handling\nI0326 00:15:28.549965 1560 log.go:172] (0xc0007ff5e0) (1) Data frame sent\nI0326 00:15:28.549972 1560 log.go:172] (0xc000b50370) (0xc0007ff5e0) Stream removed, broadcasting: 1\nI0326 00:15:28.550011 1560 log.go:172] (0xc000b50370) Go away received\nI0326 00:15:28.550344 1560 log.go:172] (0xc000b50370) (0xc0007ff5e0) Stream removed, broadcasting: 1\nI0326 00:15:28.550356 1560 log.go:172] (0xc000b50370) (0xc0003b6aa0) Stream removed, broadcasting: 3\nI0326 00:15:28.550366 1560 log.go:172] (0xc000b50370) (0xc0003b6b40) Stream removed, broadcasting: 5\n" Mar 26 00:15:28.556: INFO: stdout: "" Mar 26 00:15:28.556: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:15:28.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1914" for this suite. 
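------------------------------
For reference, the ExternalName-to-NodePort transition exercised above can be replayed by hand. A minimal sketch, assuming a throwaway namespace svc-demo and an illustrative external name; the e2e test additionally backs the service with a replication controller and a selector so the NodePort actually routes to pods:

kubectl create namespace svc-demo
cat <<'EOF' | kubectl apply -n svc-demo -f -
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com
EOF
# Changing the type requires dropping externalName and supplying ports;
# the API server then assigns a ClusterIP and a nodePort.
kubectl patch service externalname-service -n svc-demo --type=merge \
  -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80}]}}'
kubectl get service externalname-service -n svc-demo
# Reachability is then probed exactly as in the log, from an exec pod:
#   nc -zv -t -w 2 externalname-service 80
------------------------------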
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.127 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":148,"skipped":2444,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:15:28.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-e40d0610-0dde-41f2-bf95-00000b50a89c STEP: Creating secret with name s-test-opt-upd-876ba640-1bff-48e3-b621-d3610f936ddb STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e40d0610-0dde-41f2-bf95-00000b50a89c STEP: Updating secret s-test-opt-upd-876ba640-1bff-48e3-b621-d3610f936ddb STEP: Creating secret with name s-test-opt-create-cd25a091-435f-4747-923c-0527185b17e4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:17:01.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5051" for this suite. 
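------------------------------
The "optional updates" above rely on projected secret sources marked optional: such a source may be missing when the pod starts and may appear, change, or vanish later, with the kubelet resyncing the volume contents. A minimal sketch of the pod shape, with illustrative names (the test's secret names carry generated UUID suffixes):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # exists now, deleted after pod creation
          optional: true
      - secret:
          name: s-test-opt-create   # created only after pod creation
          optional: true
EOF

Deleting, updating, or creating the referenced secrets is eventually reflected under /etc/projected, which is what the "waiting to observe update in volume" step polls for.
------------------------------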
• [SLOW TEST:92.706 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2451,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:17:01.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 26 00:17:04.418: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:17:04.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3990" for this suite. 
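------------------------------
The assertion "Expected: &{OK} to match Container's Termination Message: OK" above is driven by two container fields: terminationMessagePath, the file the container writes, and terminationMessagePolicy, where FallbackToLogsOnError uses that file when present and falls back to the log tail only if the container fails without writing one. A minimal sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod has succeeded, the message surfaces in status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------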
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2453,"failed":0} SSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:17:04.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-6cbc7dcd-37ae-44eb-b0c2-13fbc6d9a68d STEP: Creating secret with name secret-projected-all-test-volume-59925cbf-ef4c-45f6-a575-2bb2bc4044b3 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 26 00:17:04.578: INFO: Waiting up to 5m0s for pod "projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165" in namespace "projected-4369" to be "Succeeded or Failed" Mar 26 00:17:04.581: INFO: Pod "projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330096ms Mar 26 00:17:06.603: INFO: Pod "projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024491067s Mar 26 00:17:08.607: INFO: Pod "projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028628151s STEP: Saw pod success Mar 26 00:17:08.607: INFO: Pod "projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165" satisfied condition "Succeeded or Failed" Mar 26 00:17:08.610: INFO: Trying to get logs from node latest-worker pod projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165 container projected-all-volume-test: STEP: delete the pod Mar 26 00:17:08.643: INFO: Waiting for pod projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165 to disappear Mar 26 00:17:08.647: INFO: Pod projected-volume-c3de8aff-39c0-4ece-8c5c-814328995165 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:17:08.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4369" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2457,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:17:08.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:17:09.048: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:17:11.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778629, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720778629, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:17:14.130: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:17:26.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9495" for this suite. 
STEP: Destroying namespace "webhook-9495-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.811 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":152,"skipped":2457,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:17:26.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-61a0defb-273f-4a19-9a3b-e291e11fae73 STEP: Creating a pod to test consume secrets Mar 26 00:17:26.620: INFO: Waiting up to 5m0s for pod "pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441" in namespace "secrets-1669" to be "Succeeded or Failed" Mar 26 00:17:26.624: INFO: Pod "pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441": Phase="Pending", Reason="", readiness=false. Elapsed: 3.620746ms Mar 26 00:17:28.628: INFO: Pod "pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007960872s Mar 26 00:17:30.633: INFO: Pod "pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012558904s STEP: Saw pod success Mar 26 00:17:30.633: INFO: Pod "pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441" satisfied condition "Succeeded or Failed" Mar 26 00:17:30.636: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441 container secret-volume-test: STEP: delete the pod Mar 26 00:17:30.672: INFO: Waiting for pod pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441 to disappear Mar 26 00:17:30.699: INFO: Pod pod-secrets-d731f850-8b9b-41f0-8cb5-f04fff509441 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:17:30.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1669" for this suite. STEP: Destroying namespace "secret-namespace-906" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:17:30.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 26 00:17:30.794: INFO: Waiting up to 5m0s for pod "downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d" in namespace "downward-api-3020" to be "Succeeded or Failed" Mar 26 00:17:30.867: INFO: Pod "downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 72.801098ms Mar 26 00:17:32.879: INFO: Pod "downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084529315s Mar 26 00:17:34.883: INFO: Pod "downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089030737s STEP: Saw pod success Mar 26 00:17:34.884: INFO: Pod "downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d" satisfied condition "Succeeded or Failed" Mar 26 00:17:34.886: INFO: Trying to get logs from node latest-worker pod downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d container dapi-container: STEP: delete the pod Mar 26 00:17:34.934: INFO: Waiting for pod downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d to disappear Mar 26 00:17:34.957: INFO: Pod downward-api-b51db1d4-be13-4668-aced-d7d0f9328e7d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:17:34.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3020" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2501,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:17:34.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4207 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 26 00:17:35.083: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 26 00:17:35.126: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 26 00:17:37.130: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 26 00:17:39.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:41.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:43.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:45.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:47.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:49.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:51.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:17:53.130: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 26 00:17:53.136: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 26 00:17:55.140: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 26 00:17:57.140: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 26 00:18:01.208: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.156 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4207 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 00:18:01.208: INFO: >>> kubeConfig: /root/.kube/config I0326 00:18:01.236303 7 log.go:172] (0xc002a8c630) (0xc000efed20) Create stream I0326 00:18:01.236331 7 log.go:172] (0xc002a8c630) (0xc000efed20) Stream added, broadcasting: 1 I0326 00:18:01.240323 7 log.go:172] (0xc002a8c630) Reply frame received for 1 I0326 00:18:01.240400 7 log.go:172] (0xc002a8c630) (0xc000efedc0) Create stream I0326 00:18:01.240441 7 log.go:172] (0xc002a8c630) (0xc000efedc0) Stream added, broadcasting: 3 I0326 00:18:01.244165 7 log.go:172] (0xc002a8c630) Reply frame received for 3 I0326 00:18:01.244202 7 log.go:172] (0xc002a8c630) (0xc0014b7180) Create stream I0326 00:18:01.244213 7 log.go:172] (0xc002a8c630) (0xc0014b7180) Stream added, broadcasting: 5 I0326 00:18:01.245250 7 log.go:172] (0xc002a8c630) Reply 
frame received for 5 I0326 00:18:02.323156 7 log.go:172] (0xc002a8c630) Data frame received for 3 I0326 00:18:02.323188 7 log.go:172] (0xc000efedc0) (3) Data frame handling I0326 00:18:02.323206 7 log.go:172] (0xc000efedc0) (3) Data frame sent I0326 00:18:02.323414 7 log.go:172] (0xc002a8c630) Data frame received for 3 I0326 00:18:02.323432 7 log.go:172] (0xc000efedc0) (3) Data frame handling I0326 00:18:02.323795 7 log.go:172] (0xc002a8c630) Data frame received for 5 I0326 00:18:02.323824 7 log.go:172] (0xc0014b7180) (5) Data frame handling I0326 00:18:02.325381 7 log.go:172] (0xc002a8c630) Data frame received for 1 I0326 00:18:02.325415 7 log.go:172] (0xc000efed20) (1) Data frame handling I0326 00:18:02.325446 7 log.go:172] (0xc000efed20) (1) Data frame sent I0326 00:18:02.325869 7 log.go:172] (0xc002a8c630) (0xc000efed20) Stream removed, broadcasting: 1 I0326 00:18:02.325923 7 log.go:172] (0xc002a8c630) Go away received I0326 00:18:02.326048 7 log.go:172] (0xc002a8c630) (0xc000efed20) Stream removed, broadcasting: 1 I0326 00:18:02.326095 7 log.go:172] (0xc002a8c630) (0xc000efedc0) Stream removed, broadcasting: 3 I0326 00:18:02.326123 7 log.go:172] (0xc002a8c630) (0xc0014b7180) Stream removed, broadcasting: 5 Mar 26 00:18:02.326: INFO: Found all expected endpoints: [netserver-0] Mar 26 00:18:02.328: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.50 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4207 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 00:18:02.329: INFO: >>> kubeConfig: /root/.kube/config I0326 00:18:02.363914 7 log.go:172] (0xc002a8cbb0) (0xc000eff220) Create stream I0326 00:18:02.363938 7 log.go:172] (0xc002a8cbb0) (0xc000eff220) Stream added, broadcasting: 1 I0326 00:18:02.365698 7 log.go:172] (0xc002a8cbb0) Reply frame received for 1 I0326 00:18:02.365737 7 log.go:172] (0xc002a8cbb0) (0xc001434be0) Create stream I0326 00:18:02.365778 7 log.go:172] (0xc002a8cbb0) (0xc001434be0) Stream added, broadcasting: 3 I0326 00:18:02.366739 7 log.go:172] (0xc002a8cbb0) Reply frame received for 3 I0326 00:18:02.366771 7 log.go:172] (0xc002a8cbb0) (0xc000eff400) Create stream I0326 00:18:02.366783 7 log.go:172] (0xc002a8cbb0) (0xc000eff400) Stream added, broadcasting: 5 I0326 00:18:02.367618 7 log.go:172] (0xc002a8cbb0) Reply frame received for 5 I0326 00:18:03.440508 7 log.go:172] (0xc002a8cbb0) Data frame received for 3 I0326 00:18:03.440549 7 log.go:172] (0xc001434be0) (3) Data frame handling I0326 00:18:03.440573 7 log.go:172] (0xc001434be0) (3) Data frame sent I0326 00:18:03.440586 7 log.go:172] (0xc002a8cbb0) Data frame received for 3 I0326 00:18:03.440600 7 log.go:172] (0xc001434be0) (3) Data frame handling I0326 00:18:03.440932 7 log.go:172] (0xc002a8cbb0) Data frame received for 5 I0326 00:18:03.440954 7 log.go:172] (0xc000eff400) (5) Data frame handling I0326 00:18:03.443054 7 log.go:172] (0xc002a8cbb0) Data frame received for 1 I0326 00:18:03.443075 7 log.go:172] (0xc000eff220) (1) Data frame handling I0326 00:18:03.443091 7 log.go:172] (0xc000eff220) (1) Data frame sent I0326 00:18:03.443100 7 log.go:172] (0xc002a8cbb0) (0xc000eff220) Stream removed, broadcasting: 1 I0326 00:18:03.443172 7 log.go:172] (0xc002a8cbb0) (0xc000eff220) Stream removed, broadcasting: 1 I0326 00:18:03.443187 7 log.go:172] (0xc002a8cbb0) (0xc001434be0) Stream removed, broadcasting: 3 I0326 00:18:03.443336 7 log.go:172] (0xc002a8cbb0) (0xc000eff400) Stream removed, broadcasting: 5 
Mar 26 00:18:03.443: INFO: Found all expected endpoints: [netserver-1] I0326 00:18:03.443380 7 log.go:172] (0xc002a8cbb0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:18:03.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4207" for this suite. • [SLOW TEST:28.474 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:18:03.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:18:10.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2701" for this suite. • [SLOW TEST:7.132 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":156,"skipped":2541,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:18:10.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-8mv5 STEP: Creating a pod to test atomic-volume-subpath Mar 26 00:18:10.678: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8mv5" in namespace "subpath-2024" to be "Succeeded or Failed" Mar 26 00:18:10.681: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014934ms Mar 26 00:18:12.685: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006535021s Mar 26 00:18:14.689: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 4.010749181s Mar 26 00:18:16.693: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 6.014969549s Mar 26 00:18:18.698: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 8.019408347s Mar 26 00:18:20.702: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 10.023887536s Mar 26 00:18:22.706: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 12.028150766s Mar 26 00:18:24.710: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 14.032141808s Mar 26 00:18:26.715: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 16.036514469s Mar 26 00:18:28.719: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 18.040449304s Mar 26 00:18:30.723: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 20.044304644s Mar 26 00:18:32.736: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Running", Reason="", readiness=true. Elapsed: 22.057299164s Mar 26 00:18:34.739: INFO: Pod "pod-subpath-test-configmap-8mv5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.060620152s STEP: Saw pod success Mar 26 00:18:34.739: INFO: Pod "pod-subpath-test-configmap-8mv5" satisfied condition "Succeeded or Failed" Mar 26 00:18:34.741: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-8mv5 container test-container-subpath-configmap-8mv5: STEP: delete the pod Mar 26 00:18:34.774: INFO: Waiting for pod pod-subpath-test-configmap-8mv5 to disappear Mar 26 00:18:34.781: INFO: Pod pod-subpath-test-configmap-8mv5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-8mv5 Mar 26 00:18:34.781: INFO: Deleting pod "pod-subpath-test-configmap-8mv5" in namespace "subpath-2024" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:18:34.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2024" for this suite. • [SLOW TEST:24.204 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":157,"skipped":2550,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:18:34.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:18:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5985" for this suite. • [SLOW TEST:16.148 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":158,"skipped":2556,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:18:50.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-7b194267-d17c-451e-a1dc-3e974cbfa3b5 STEP: Creating a pod to test consume configMaps Mar 26 00:18:51.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde" in namespace "configmap-3491" to be "Succeeded or Failed" Mar 26 00:18:51.012: INFO: Pod "pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490832ms Mar 26 00:18:53.036: INFO: Pod "pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027461129s Mar 26 00:18:55.039: INFO: Pod "pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031158169s STEP: Saw pod success Mar 26 00:18:55.039: INFO: Pod "pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde" satisfied condition "Succeeded or Failed" Mar 26 00:18:55.042: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde container configmap-volume-test: STEP: delete the pod Mar 26 00:18:55.061: INFO: Waiting for pod pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde to disappear Mar 26 00:18:55.077: INFO: Pod pod-configmaps-a1913c9d-903d-4ef7-adfa-b3421992cdde no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:18:55.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3491" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2562,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:18:55.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 26 00:18:55.192: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 26 00:18:55.202: INFO: Waiting for terminating namespaces to be deleted... Mar 26 00:18:55.204: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 26 00:18:55.212: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 00:18:55.212: INFO: Container kube-proxy ready: true, restart count 0 Mar 26 00:18:55.212: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 00:18:55.212: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:18:55.212: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 26 00:18:55.217: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 00:18:55.217: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:18:55.217: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 00:18:55.217: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1cf1830f-6631-42d8-831c-9b1918aa889f 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-1cf1830f-6631-42d8-831c-9b1918aa889f off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-1cf1830f-6631-42d8-831c-9b1918aa889f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:24:03.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5425" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.287 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":160,"skipped":2575,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:24:03.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:24:09.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5614" for this suite. 
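------------------------------
The ordering property checked above is a guarantee of the watch API itself: watches started from the same resourceVersion deliver the same events in the same order. It can be observed outside the suite with a raw watch through kubectl proxy (port and namespace are illustrative):

kubectl proxy --port=8001 &
RV=$(kubectl get configmaps -n default -o jsonpath='{.metadata.resourceVersion}')
# Any number of concurrent watches started from $RV replay identical streams.
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}"
------------------------------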
• [SLOW TEST:5.878 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":161,"skipped":2580,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:24:09.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Mar 26 00:24:09.332: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix957569681/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:24:09.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2118" for this suite. 
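------------------------------
--unix-socket makes the proxy listen on a filesystem socket instead of a TCP port; the test then retrieves /api/ through it. A quick manual equivalent (socket path illustrative; curl 7.40+ supports --unix-socket):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
------------------------------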
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":162,"skipped":2594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:24:09.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 26 00:24:09.485: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:24:22.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2929" for this suite. • [SLOW TEST:13.337 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2676,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:24:22.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 26 00:24:30.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:30.903: INFO: Pod pod-with-poststart-exec-hook still exists Mar 26 00:24:32.903: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:32.939: INFO: Pod pod-with-poststart-exec-hook still exists Mar 26 00:24:34.904: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:34.908: INFO: Pod pod-with-poststart-exec-hook still exists Mar 26 00:24:36.904: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:36.908: INFO: Pod pod-with-poststart-exec-hook still exists Mar 26 00:24:38.904: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:38.908: INFO: Pod pod-with-poststart-exec-hook still exists Mar 26 00:24:40.904: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:40.908: INFO: Pod pod-with-poststart-exec-hook still exists Mar 26 00:24:42.904: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 26 00:24:42.908: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:24:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3595" for this suite. • [SLOW TEST:20.164 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:24:42.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:24:42.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 26 00:24:43.561: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-03-26T00:24:43Z generation:1 name:name1 resourceVersion:2814603 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cb3fbd4e-eef1-4230-bf15-00c4f1702c42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 26 00:24:53.566: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T00:24:53Z generation:1 name:name2 resourceVersion:2814649 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:774eed95-9ee5-4541-9d70-a2b7013295c3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 26 00:25:03.572: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T00:24:43Z generation:2 name:name1 resourceVersion:2814680 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cb3fbd4e-eef1-4230-bf15-00c4f1702c42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 26 00:25:13.577: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T00:24:53Z generation:2 name:name2 resourceVersion:2814710 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:774eed95-9ee5-4541-9d70-a2b7013295c3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 26 00:25:23.585: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T00:24:43Z generation:2 name:name1 resourceVersion:2814740 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cb3fbd4e-eef1-4230-bf15-00c4f1702c42] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 26 00:25:33.593: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T00:24:53Z generation:2 name:name2 resourceVersion:2814770 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:774eed95-9ee5-4541-9d70-a2b7013295c3] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:25:44.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-876" for this suite. 
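The watch test above registers a CRD (group mygroup.example.com, kind WishIHadChosenNoxu, plural noxus; the selfLinks show it is cluster-scoped) and asserts that ADDED, MODIFIED and DELETED events arrive in order. A rough command-line equivalent, assuming the cluster still serves apiextensions.k8s.io/v1beta1 (this run's API server is v1.17):

cat <<'EOF' | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: WishIHadChosenNoxu
EOF
# Stream changes to the new resource type; create, patch and delete
# objects from another shell to see the corresponding events.
kubectl get noxus --watch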
• [SLOW TEST:61.194 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":165,"skipped":2709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:25:44.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:25:44.197: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:25:45.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8519" for this suite. 
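Create/delete symmetry is the whole of the test above: once the definition is deleted, the served API and every stored instance disappear with it. A sketch reusing the names from the watch test:

kubectl get crd noxus.mygroup.example.com      # definition exists, API is served
kubectl delete crd noxus.mygroup.example.com   # definition and all its objects go away
kubectl get noxus                              # now fails: resource type no longer served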
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":166,"skipped":2737,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:25:45.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 26 00:25:45.301: INFO: Waiting up to 5m0s for pod "downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848" in namespace "downward-api-9466" to be "Succeeded or Failed" Mar 26 00:25:45.320: INFO: Pod "downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848": Phase="Pending", Reason="", readiness=false. Elapsed: 19.61122ms Mar 26 00:25:47.324: INFO: Pod "downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023556357s Mar 26 00:25:49.327: INFO: Pod "downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026588815s STEP: Saw pod success Mar 26 00:25:49.327: INFO: Pod "downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848" satisfied condition "Succeeded or Failed" Mar 26 00:25:49.329: INFO: Trying to get logs from node latest-worker2 pod downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848 container dapi-container: STEP: delete the pod Mar 26 00:25:49.360: INFO: Waiting for pod downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848 to disappear Mar 26 00:25:49.376: INFO: Pod downward-api-0e386a80-8877-4ce1-9bfa-15466e48a848 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:25:49.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9466" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2743,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:25:49.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 26 00:25:49.470: INFO: Waiting up to 5m0s for pod "pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0" in namespace "emptydir-2366" to be "Succeeded or Failed" Mar 26 00:25:49.487: INFO: Pod "pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.833851ms Mar 26 00:25:51.490: INFO: Pod "pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02067388s Mar 26 00:25:53.508: INFO: Pod "pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038705082s STEP: Saw pod success Mar 26 00:25:53.508: INFO: Pod "pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0" satisfied condition "Succeeded or Failed" Mar 26 00:25:53.511: INFO: Trying to get logs from node latest-worker pod pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0 container test-container: STEP: delete the pod Mar 26 00:25:53.541: INFO: Waiting for pod pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0 to disappear Mar 26 00:25:53.546: INFO: Pod pod-5b413fd7-9659-45d4-83ea-3d8dcaa73cd0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:25:53.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2366" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2762,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:25:53.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:25:53.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439" in namespace "downward-api-411" to be "Succeeded or Failed" Mar 26 00:25:53.646: INFO: Pod "downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439": Phase="Pending", Reason="", readiness=false. Elapsed: 38.843956ms Mar 26 00:25:55.651: INFO: Pod "downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044200985s Mar 26 00:25:57.656: INFO: Pod "downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048658239s STEP: Saw pod success Mar 26 00:25:57.656: INFO: Pod "downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439" satisfied condition "Succeeded or Failed" Mar 26 00:25:57.659: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439 container client-container: STEP: delete the pod Mar 26 00:25:57.725: INFO: Waiting for pod downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439 to disappear Mar 26 00:25:57.731: INFO: Pod downwardapi-volume-e69d3112-2ffa-46cf-a8d2-59f7be700439 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:25:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-411" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:25:57.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 26 00:26:01.934: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:26:01.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4798" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2795,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:26:01.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:26:06.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8648" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2797,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:26:06.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:26:06.193: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 26 00:26:11.196: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 26 00:26:11.196: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 26 00:26:11.263: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-428 /apis/apps/v1/namespaces/deployment-428/deployments/test-cleanup-deployment 14157f23-fce7-4ba5-a9d5-f01c6441c836 2815039 1 2020-03-26 00:26:11 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038214b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 26 00:26:11.301: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-428 /apis/apps/v1/namespaces/deployment-428/replicasets/test-cleanup-deployment-577c77b589 8087ea2f-5273-4700-9ade-f03a54138bc6 2815041 1 2020-03-26 00:26:11 +0000 UTC map[name:cleanup-pod 
pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 14157f23-fce7-4ba5-a9d5-f01c6441c836 0xc0043e7a27 0xc0043e7a28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043e7a98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 26 00:26:11.302: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 26 00:26:11.302: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-428 /apis/apps/v1/namespaces/deployment-428/replicasets/test-cleanup-controller b184e334-0ff8-4b1a-b4b0-f0e44ddcbd56 2815040 1 2020-03-26 00:26:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 14157f23-fce7-4ba5-a9d5-f01c6441c836 0xc0043e7957 0xc0043e7958}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043e79b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 26 00:26:11.327: INFO: Pod "test-cleanup-controller-trrrk" is available: &Pod{ObjectMeta:{test-cleanup-controller-trrrk test-cleanup-controller- deployment-428 /api/v1/namespaces/deployment-428/pods/test-cleanup-controller-trrrk 8f4c2ef1-ebb4-4500-a0a8-559e34636826 2815026 0 2020-03-26 00:26:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller b184e334-0ff8-4b1a-b4b0-f0e44ddcbd56 0xc0043e7f47 0xc0043e7f48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ljbdq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ljbdq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ljbdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:26:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:26:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.57,StartTime:2020-03-26 00:26:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 00:26:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4c728b365e6a68df0aaf111eb48219e78baa903de2d38722d46f32bdc3fe8dd3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 00:26:11.327: INFO: Pod "test-cleanup-deployment-577c77b589-c5wkz" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-c5wkz test-cleanup-deployment-577c77b589- deployment-428 /api/v1/namespaces/deployment-428/pods/test-cleanup-deployment-577c77b589-c5wkz e5711b44-c39b-4c40-bde4-90213c7b1301 2815048 0 2020-03-26 00:26:11 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 8087ea2f-5273-4700-9ade-f03a54138bc6 0xc0042f80d7 0xc0042f80d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ljbdq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ljbdq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ljbdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*
0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 00:26:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:26:11.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-428" for this suite. • [SLOW TEST:5.308 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":172,"skipped":2799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:26:11.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:26:11.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3981" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":173,"skipped":2854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:26:11.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7420 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-7420 Mar 26 00:26:12.109: INFO: Found 0 stateful pods, waiting for 1 Mar 26 00:26:22.116: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 26 00:26:22.131: INFO: Deleting all statefulset in ns statefulset-7420 Mar 26 00:26:22.133: INFO: Scaling statefulset ss to 0 Mar 26 00:26:42.262: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 00:26:42.266: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:26:42.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7420" for this suite. 
• [SLOW TEST:30.368 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":174,"skipped":2879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:26:42.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 26 00:26:42.378: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 26 00:26:42.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7684' Mar 26 00:26:45.215: INFO: stderr: "" Mar 26 00:26:45.215: INFO: stdout: "service/agnhost-slave created\n" Mar 26 00:26:45.215: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 26 00:26:45.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7684' Mar 26 00:26:45.467: INFO: stderr: "" Mar 26 00:26:45.467: INFO: stdout: "service/agnhost-master created\n" Mar 26 00:26:45.467: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 26 00:26:45.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7684' Mar 26 00:26:45.716: INFO: stderr: "" Mar 26 00:26:45.716: INFO: stdout: "service/frontend created\n" Mar 26 00:26:45.716: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 26 00:26:45.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7684' Mar 26 00:26:45.963: INFO: stderr: "" Mar 26 00:26:45.963: INFO: stdout: "deployment.apps/frontend created\n" Mar 26 00:26:45.963: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 26 00:26:45.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7684' Mar 26 00:26:46.304: INFO: stderr: "" Mar 26 00:26:46.304: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 26 00:26:46.305: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 26 00:26:46.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7684' Mar 26 00:26:46.656: INFO: stderr: "" Mar 26 00:26:46.656: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 26 00:26:46.656: INFO: Waiting for all frontend pods to be Running. Mar 26 00:26:56.706: INFO: Waiting for frontend to serve content. Mar 26 00:26:56.717: INFO: Trying to add a new entry to the guestbook. Mar 26 00:26:56.729: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 26 00:26:56.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7684' Mar 26 00:26:56.925: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:26:56.925: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 26 00:26:56.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7684' Mar 26 00:26:57.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:26:57.110: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 26 00:26:57.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7684' Mar 26 00:26:57.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:26:57.231: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 26 00:26:57.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7684' Mar 26 00:26:57.326: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:26:57.326: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 26 00:26:57.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7684' Mar 26 00:26:57.422: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:26:57.422: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 26 00:26:57.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7684' Mar 26 00:26:57.544: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:26:57.544: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:26:57.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7684" for this suite. 
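The guestbook manifests above are created with kubectl create -f - fed from stdin, then torn down with forced, zero-grace-period deletes. To poke at the running frontend outside the test harness, one option is a port-forward (local port illustrative; kubectl-7684 is this run's ephemeral namespace):

kubectl port-forward deployment/frontend 8080:80 -n kubectl-7684 &
curl http://localhost:8080/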
• [SLOW TEST:15.262 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":175,"skipped":2917,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:26:57.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:26:57.624: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b" in namespace "projected-1932" to be "Succeeded or Failed" Mar 26 00:26:57.679: INFO: Pod "downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.982059ms Mar 26 00:26:59.682: INFO: Pod "downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058574836s Mar 26 00:27:01.689: INFO: Pod "downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06591135s STEP: Saw pod success Mar 26 00:27:01.690: INFO: Pod "downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b" satisfied condition "Succeeded or Failed" Mar 26 00:27:01.695: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b container client-container: STEP: delete the pod Mar 26 00:27:01.727: INFO: Waiting for pod downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b to disappear Mar 26 00:27:01.737: INFO: Pod downwardapi-volume-7053b96c-c4a3-42e4-8698-3ac2b704f78b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:01.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1932" for this suite. 
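This is the same downward-API-as-volume idea as earlier, but wrapped in a projected volume, which can merge downwardAPI, secret and configMap sources into a single mount. A sketch for the memory-limit case (illustrative names and values):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit; echo"]
    resources:
      limits: {memory: 64Mi}
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs projected-downward-demo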
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2928,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:01.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:27:02.836: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:27:04.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779222, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779222, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779222, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779222, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:27:07.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:08.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2581" for this suite. STEP: Destroying namespace "webhook-2581-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.648 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":177,"skipped":2937,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:08.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 26 00:27:08.507: INFO: Waiting up to 5m0s for pod "pod-f1eaca2b-1209-4391-b4af-7f35b93496e5" in namespace "emptydir-6263" to be "Succeeded or Failed" Mar 26 00:27:08.510: INFO: Pod "pod-f1eaca2b-1209-4391-b4af-7f35b93496e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.456908ms Mar 26 00:27:10.514: INFO: Pod "pod-f1eaca2b-1209-4391-b4af-7f35b93496e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007503416s Mar 26 00:27:12.518: INFO: Pod "pod-f1eaca2b-1209-4391-b4af-7f35b93496e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011461586s STEP: Saw pod success Mar 26 00:27:12.518: INFO: Pod "pod-f1eaca2b-1209-4391-b4af-7f35b93496e5" satisfied condition "Succeeded or Failed" Mar 26 00:27:12.521: INFO: Trying to get logs from node latest-worker2 pod pod-f1eaca2b-1209-4391-b4af-7f35b93496e5 container test-container: STEP: delete the pod Mar 26 00:27:12.556: INFO: Waiting for pod pod-f1eaca2b-1209-4391-b4af-7f35b93496e5 to disappear Mar 26 00:27:12.560: INFO: Pod pod-f1eaca2b-1209-4391-b4af-7f35b93496e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:12.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6263" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":2953,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:12.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-8668 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8668 to expose endpoints map[] Mar 26 00:27:12.645: INFO: Get endpoints failed (6.522502ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 26 00:27:13.648: INFO: successfully validated that service endpoint-test2 in namespace services-8668 exposes endpoints map[] (1.009773321s elapsed) STEP: Creating pod pod1 in namespace services-8668 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8668 to expose endpoints map[pod1:[80]] Mar 26 00:27:16.688: INFO: successfully validated that service endpoint-test2 in namespace services-8668 exposes endpoints map[pod1:[80]] (3.03334199s elapsed) STEP: Creating pod pod2 in namespace services-8668 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8668 to expose endpoints map[pod1:[80] pod2:[80]] Mar 26 00:27:19.853: INFO: successfully validated that service endpoint-test2 in namespace services-8668 exposes endpoints map[pod1:[80] pod2:[80]] (3.160476486s elapsed) STEP: Deleting pod pod1 in namespace services-8668 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8668 to expose endpoints map[pod2:[80]] Mar 26 00:27:20.891: INFO: successfully validated that service endpoint-test2 in namespace services-8668 exposes endpoints map[pod2:[80]] (1.03371741s elapsed) STEP: Deleting pod pod2 in namespace services-8668 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8668 to expose endpoints map[] Mar 26 00:27:21.906: INFO: successfully validated that service endpoint-test2 in namespace services-8668 exposes endpoints map[] (1.008567993s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8668" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.457 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":179,"skipped":2973,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:22.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 26 00:27:30.139: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 26 00:27:30.145: INFO: Pod pod-with-poststart-http-hook still exists Mar 26 00:27:32.146: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 26 00:27:32.150: INFO: Pod pod-with-poststart-http-hook still exists Mar 26 00:27:34.145: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 26 00:27:34.150: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:34.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7203" for this suite. 
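
The lifecycle-hook test wires a postStart HTTP hook into the pod it creates: right after the container starts, the kubelet issues the GET, and the "check poststart hook" step confirms the handler pod received it. A hedged sketch of the relevant container fields; path, port, and host are placeholders for the handler pod deployed in BeforeEach, and in core/v1 of this era the handler type is named Handler (later renamed LifecycleHandler):

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // Container with a postStart HTTP hook: the kubelet GETs the given
    // endpoint as soon as the container starts, which is how the test's
    // handler pod observes the hook firing.
    func postStartHTTPHookContainer() corev1.Container {
        return corev1.Container{
            Name:  "pod-with-poststart-http-hook",
            Image: "k8s.gcr.io/pause", // illustrative
            Lifecycle: &corev1.Lifecycle{
                PostStart: &corev1.Handler{ // LifecycleHandler in later releases
                    HTTPGet: &corev1.HTTPGetAction{
                        Path: "/echo?msg=poststart", // assumed
                        Port: intstr.FromInt(8080),  // assumed
                        Host: "10.244.0.10",         // handler pod IP; assumed
                    },
                },
            },
        }
    }
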
• [SLOW TEST:12.133 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":2984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:34.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 26 00:27:34.238: INFO: Waiting up to 5m0s for pod "pod-36dcd221-7714-4852-86e8-fd066ec03dcb" in namespace "emptydir-3111" to be "Succeeded or Failed" Mar 26 00:27:34.282: INFO: Pod "pod-36dcd221-7714-4852-86e8-fd066ec03dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 44.724361ms Mar 26 00:27:36.288: INFO: Pod "pod-36dcd221-7714-4852-86e8-fd066ec03dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050318271s Mar 26 00:27:38.292: INFO: Pod "pod-36dcd221-7714-4852-86e8-fd066ec03dcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054432443s STEP: Saw pod success Mar 26 00:27:38.292: INFO: Pod "pod-36dcd221-7714-4852-86e8-fd066ec03dcb" satisfied condition "Succeeded or Failed" Mar 26 00:27:38.295: INFO: Trying to get logs from node latest-worker pod pod-36dcd221-7714-4852-86e8-fd066ec03dcb container test-container: STEP: delete the pod Mar 26 00:27:38.331: INFO: Waiting for pod pod-36dcd221-7714-4852-86e8-fd066ec03dcb to disappear Mar 26 00:27:38.354: INFO: Pod pod-36dcd221-7714-4852-86e8-fd066ec03dcb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:38.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3111" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3027,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:38.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 26 00:27:41.510: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:27:41.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8654" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3040,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:27:41.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-5xbx STEP: Creating a pod to test atomic-volume-subpath Mar 26 00:27:41.773: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5xbx" in namespace "subpath-9758" to be "Succeeded or Failed" Mar 26 00:27:41.777: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Pending", Reason="", readiness=false. Elapsed: 3.963335ms Mar 26 00:27:43.780: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006896792s Mar 26 00:27:45.784: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 4.011232876s Mar 26 00:27:47.789: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 6.015491083s Mar 26 00:27:49.793: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 8.019961645s Mar 26 00:27:51.798: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 10.024494329s Mar 26 00:27:53.802: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 12.028728359s Mar 26 00:27:55.806: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 14.033252693s Mar 26 00:27:57.811: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 16.037607571s Mar 26 00:27:59.815: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 18.041833402s Mar 26 00:28:01.819: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 20.046003357s Mar 26 00:28:03.823: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Running", Reason="", readiness=true. Elapsed: 22.05000205s Mar 26 00:28:05.827: INFO: Pod "pod-subpath-test-secret-5xbx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054238509s STEP: Saw pod success Mar 26 00:28:05.827: INFO: Pod "pod-subpath-test-secret-5xbx" satisfied condition "Succeeded or Failed" Mar 26 00:28:05.831: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-5xbx container test-container-subpath-secret-5xbx: STEP: delete the pod Mar 26 00:28:05.870: INFO: Waiting for pod pod-subpath-test-secret-5xbx to disappear Mar 26 00:28:05.883: INFO: Pod pod-subpath-test-secret-5xbx no longer exists STEP: Deleting pod pod-subpath-test-secret-5xbx Mar 26 00:28:05.883: INFO: Deleting pod "pod-subpath-test-secret-5xbx" in namespace "subpath-9758" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:28:05.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9758" for this suite. 
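
The subpath test mounts a secret volume through a subPath, so the container sees a single projected entry rather than the whole volume, and the kubelet's atomic-writer machinery has to keep that file valid across updates; the long Running phase above is the observation window for that. A minimal sketch, with the secret name and key as assumptions:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // Pod mounting one entry of a secret volume via subPath instead of
    // mounting the volume root.
    func subpathSecretPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // assumed
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container-subpath",
                    Image: "busybox", // illustrative
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                        SubPath:   "secret-key", // single entry; assumed key
                    }},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
    }
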
• [SLOW TEST:24.263 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":183,"skipped":3043,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:28:05.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 26 00:28:06.463: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 26 00:28:08.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779286, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779286, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779286, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779286, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:28:11.504: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:28:11.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:28:12.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7646" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.946 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":184,"skipped":3051,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:28:12.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-491/secret-test-c05da1ad-5d24-4cd1-92ff-4560a56f8bfd STEP: Creating a pod to test consume secrets Mar 26 00:28:12.936: INFO: Waiting up to 5m0s for pod "pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268" in namespace "secrets-491" to be "Succeeded or Failed" Mar 26 00:28:12.942: INFO: Pod "pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019577ms Mar 26 00:28:14.946: INFO: Pod "pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009678612s Mar 26 00:28:16.950: INFO: Pod "pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013979079s STEP: Saw pod success Mar 26 00:28:16.950: INFO: Pod "pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268" satisfied condition "Succeeded or Failed" Mar 26 00:28:16.954: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268 container env-test: STEP: delete the pod Mar 26 00:28:17.010: INFO: Waiting for pod pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268 to disappear Mar 26 00:28:17.020: INFO: Pod pod-configmaps-3389e7a5-6069-4b12-9df9-b3025d83d268 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:28:17.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-491" for this suite. 
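
The secrets test above consumes a secret through the environment rather than a volume: one key is projected into an env var, and the env-test container prints its environment for the test to assert on. A sketch of the env wiring; the variable name and key are assumptions, the secret name is the one from the log:

    import corev1 "k8s.io/api/core/v1"

    // One secret key projected into an environment variable.
    func secretEnv() []corev1.EnvVar {
        return []corev1.EnvVar{{
            Name: "SECRET_DATA", // assumed
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "secret-test-c05da1ad-5d24-4cd1-92ff-4560a56f8bfd",
                    },
                    Key: "data-1", // assumed
                },
            },
        }}
    }
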
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3060,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:28:17.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:28:17.704: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:28:19.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:28:22.746: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:28:22.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5222" for this suite. STEP: Destroying namespace "webhook-5222-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.041 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":186,"skipped":3061,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:28:23.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:28:27.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8746" for this suite. 
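
The kubelet hostAliases test relies on pod-spec-level /etc/hosts management: entries listed under hostAliases are appended to the container's /etc/hosts by the kubelet, and the test reads the file back. A minimal sketch; IP, hostnames, and command are illustrative:

    import corev1 "k8s.io/api/core/v1"

    // PodSpec with hostAliases the kubelet will write into /etc/hosts.
    func hostAliasesSpec() corev1.PodSpec {
        return corev1.PodSpec{
            HostAliases: []corev1.HostAlias{{
                IP:        "123.45.67.89",
                Hostnames: []string{"foo.local", "bar.local"},
            }},
            Containers: []corev1.Container{{
                Name:    "busybox-host-aliases",
                Image:   "busybox",
                Command: []string{"cat", "/etc/hosts"},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        }
    }
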
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3066,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:28:27.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:28:27.806: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:28:29.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779307, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779307, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779307, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779307, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:28:32.849: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:28:33.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2355" for this suite. STEP: Destroying namespace "webhook-2355-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":188,"skipped":3068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:28:33.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-6f8252a6-b67b-4d9f-965d-6066fca049d1 in namespace container-probe-1026 Mar 26 00:28:37.287: INFO: Started pod liveness-6f8252a6-b67b-4d9f-965d-6066fca049d1 in namespace container-probe-1026 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 00:28:37.289: INFO: Initial restart count of pod liveness-6f8252a6-b67b-4d9f-965d-6066fca049d1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:38.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1026" for this suite. 
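
The probe test's pod declares a TCP liveness probe against a port its container actually listens on, so the kubelet's checks keep passing and restartCount stays 0 across the roughly four-minute observation window logged above. A sketch of the probe fields; the image and thresholds are assumptions, and in this API era Probe embeds Handler (renamed in later releases):

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // Container whose TCP liveness probe targets a port it serves, so the
    // pod is never restarted.
    func livenessContainer() corev1.Container {
        return corev1.Container{
            Name:  "liveness",
            Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // illustrative
            LivenessProbe: &corev1.Probe{
                Handler: corev1.Handler{
                    TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
                },
                InitialDelaySeconds: 15,
                FailureThreshold:    3,
            },
        }
    }
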
• [SLOW TEST:245.023 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3102,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:38.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:32:39.216: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:32:41.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779559, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779559, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779559, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779559, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:32:44.251: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:44.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7250" for this suite. STEP: Destroying namespace "webhook-7250-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.211 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":190,"skipped":3125,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:44.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-4c789918-ccf1-4a98-ad4f-fa3bc7ba49d1 STEP: Creating a pod to test consume secrets Mar 26 00:32:44.428: INFO: Waiting up to 5m0s for pod "pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33" in namespace "secrets-2547" to be "Succeeded or Failed" Mar 26 00:32:44.455: INFO: Pod "pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33": Phase="Pending", Reason="", readiness=false. Elapsed: 27.015301ms Mar 26 00:32:46.477: INFO: Pod "pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048967837s Mar 26 00:32:48.482: INFO: Pod "pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053827786s STEP: Saw pod success Mar 26 00:32:48.482: INFO: Pod "pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33" satisfied condition "Succeeded or Failed" Mar 26 00:32:48.484: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33 container secret-volume-test: STEP: delete the pod Mar 26 00:32:48.517: INFO: Waiting for pod pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33 to disappear Mar 26 00:32:48.533: INFO: Pod pod-secrets-9574f0d9-1f82-4c2d-a06c-efa04be31d33 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:48.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2547" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3125,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:48.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 26 00:32:48.625: INFO: Waiting up to 5m0s for pod "pod-da106016-1647-4f54-866a-950055f3fa03" in namespace "emptydir-5475" to be "Succeeded or Failed" Mar 26 00:32:48.642: INFO: Pod "pod-da106016-1647-4f54-866a-950055f3fa03": Phase="Pending", Reason="", readiness=false. Elapsed: 16.664202ms Mar 26 00:32:50.646: INFO: Pod "pod-da106016-1647-4f54-866a-950055f3fa03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020718463s Mar 26 00:32:52.650: INFO: Pod "pod-da106016-1647-4f54-866a-950055f3fa03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025044722s STEP: Saw pod success Mar 26 00:32:52.650: INFO: Pod "pod-da106016-1647-4f54-866a-950055f3fa03" satisfied condition "Succeeded or Failed" Mar 26 00:32:52.654: INFO: Trying to get logs from node latest-worker pod pod-da106016-1647-4f54-866a-950055f3fa03 container test-container: STEP: delete the pod Mar 26 00:32:52.691: INFO: Waiting for pod pod-da106016-1647-4f54-866a-950055f3fa03 to disappear Mar 26 00:32:52.719: INFO: Pod pod-da106016-1647-4f54-866a-950055f3fa03 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:52.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5475" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3126,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:52.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3946.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3946.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3946.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3946.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3946.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3946.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 00:32:56.893: INFO: DNS probes using dns-3946/dns-test-c0d064f9-6a38-40b9-891c-92d9317be7e7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:56.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3946" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":193,"skipped":3139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:56.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 26 00:32:57.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Mar 26 00:32:57.543: INFO: stderr: "" Mar 26 00:32:57.543: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:57.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2731" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":194,"skipped":3173,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:57.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:32:57.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2842" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":195,"skipped":3184,"failed":0} ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:32:57.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-1a8a86dd-6fe2-4887-855a-aa0d57f951a1 STEP: Creating configMap with name cm-test-opt-upd-7933289f-7a51-4546-9300-1248cdcb9a01 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1a8a86dd-6fe2-4887-855a-aa0d57f951a1 STEP: Updating configmap cm-test-opt-upd-7933289f-7a51-4546-9300-1248cdcb9a01 STEP: Creating configMap with name cm-test-opt-create-c5aa564e-2d73-4cdc-8205-de057969ceea STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:08.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4126" for this suite. 
• [SLOW TEST:70.734 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3184,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:08.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:34:08.876: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:34:10.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779648, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779648, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779649, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779648, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:34:13.919: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:34:13.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2741-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:15.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6780" for this suite. STEP: Destroying namespace "webhook-6780-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.753 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":197,"skipped":3201,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:15.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 26 00:34:15.349: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:15.354: INFO: Number of nodes with available pods: 0 Mar 26 00:34:15.355: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:16.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:16.362: INFO: Number of nodes with available pods: 0 Mar 26 00:34:16.362: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:17.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:17.362: INFO: Number of nodes with available pods: 0 Mar 26 00:34:17.363: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:18.397: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:18.401: INFO: Number of nodes with available pods: 0 Mar 26 00:34:18.401: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:19.357: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:19.360: INFO: Number of nodes with available pods: 2 Mar 26 00:34:19.360: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 26 00:34:19.390: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:19.394: INFO: Number of nodes with available pods: 1 Mar 26 00:34:19.394: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:20.409: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:20.416: INFO: Number of nodes with available pods: 1 Mar 26 00:34:20.416: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:21.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:21.401: INFO: Number of nodes with available pods: 1 Mar 26 00:34:21.401: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:22.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:22.402: INFO: Number of nodes with available pods: 1 Mar 26 00:34:22.402: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:23.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:23.403: INFO: Number of nodes with available pods: 1 Mar 26 00:34:23.403: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:24.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:24.514: INFO: Number of nodes with available pods: 1 Mar 26 00:34:24.514: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:25.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:25.402: INFO: Number of nodes with available pods: 1 Mar 26 00:34:25.402: INFO: Node latest-worker is running more than one daemon pod Mar 26 00:34:26.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 26 00:34:26.402: INFO: Number of nodes with available pods: 2 Mar 26 00:34:26.402: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6548, will wait for the garbage collector to delete the pods Mar 26 00:34:26.465: INFO: Deleting DaemonSet.extensions daemon-set took: 6.159658ms Mar 26 00:34:26.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27831ms Mar 26 00:34:32.791: INFO: Number of nodes with available pods: 0 Mar 26 00:34:32.791: INFO: Number of running nodes: 0, number of available pods: 0 Mar 26 00:34:32.794: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6548/daemonsets","resourceVersion":"2817760"},"items":null} Mar 26 00:34:32.796: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6548/pods","resourceVersion":"2817760"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:32.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6548" for this suite. • [SLOW TEST:17.557 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":198,"skipped":3218,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:32.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 26 00:34:32.885: INFO: Waiting up to 5m0s for pod "pod-d270d2ff-5efa-48c8-8718-e90e6f414f28" in namespace "emptydir-9595" to be "Succeeded or Failed" Mar 26 00:34:32.888: INFO: Pod "pod-d270d2ff-5efa-48c8-8718-e90e6f414f28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801188ms Mar 26 00:34:34.901: INFO: Pod "pod-d270d2ff-5efa-48c8-8718-e90e6f414f28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016593392s Mar 26 00:34:36.906: INFO: Pod "pod-d270d2ff-5efa-48c8-8718-e90e6f414f28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021102959s STEP: Saw pod success Mar 26 00:34:36.906: INFO: Pod "pod-d270d2ff-5efa-48c8-8718-e90e6f414f28" satisfied condition "Succeeded or Failed" Mar 26 00:34:36.909: INFO: Trying to get logs from node latest-worker2 pod pod-d270d2ff-5efa-48c8-8718-e90e6f414f28 container test-container: STEP: delete the pod Mar 26 00:34:36.952: INFO: Waiting for pod pod-d270d2ff-5efa-48c8-8718-e90e6f414f28 to disappear Mar 26 00:34:36.976: INFO: Pod pod-d270d2ff-5efa-48c8-8718-e90e6f414f28 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:36.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9595" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3232,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:36.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 26 00:34:37.040: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 26 00:34:37.044: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 26 00:34:37.044: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 26 00:34:37.050: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 26 00:34:37.050: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 26 00:34:37.118: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 26 00:34:37.118: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to 
create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 26 00:34:44.157: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:44.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-99" for this suite. • [SLOW TEST:7.200 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":200,"skipped":3239,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:44.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 26 00:34:49.362: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:50.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-302" for this suite. 
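The expected/actual maps logged by the LimitRange spec above pin down the defaults it installs: request defaults of cpu 100m / memory 200Mi (209715200 bytes) / ephemeral-storage 200Gi (214748364800 bytes), and limit defaults of cpu 500m / memory 500Mi / ephemeral-storage 500Gi. A LimitRange applying those defaults to containers could look roughly like this (object name is illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: limits-demo                  # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:                  # filled into requests when a pod sets none
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                         # filled into limits when a pod sets none
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi

A pod submitted with no resources then reports exactly these values in spec.containers[].resources, which is what the "Verifying requests/limits" lines assert; the pod with partial requirements keeps what it set and inherits the remaining defaults.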
• [SLOW TEST:6.472 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":201,"skipped":3243,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:50.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 26 00:34:51.035: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:34:57.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7203" for this suite. 
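The InitContainer spec above builds a pod whose init container always fails under restartPolicy: Never, so the app container is never started and the pod ends up Failed. A sketch of such a pod; names, image, and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo               # hypothetical name
spec:
  restartPolicy: Never               # an init failure therefore fails the whole pod
  initContainers:
  - name: init1
    image: busybox                   # illustrative image
    command: ["sh", "-c", "exit 1"]  # always fails
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # never runs because init1 failed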
• [SLOW TEST:6.546 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":202,"skipped":3257,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:34:57.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-dbfw STEP: Creating a pod to test atomic-volume-subpath Mar 26 00:34:57.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dbfw" in namespace "subpath-5045" to be "Succeeded or Failed" Mar 26 00:34:57.630: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Pending", Reason="", readiness=false. Elapsed: 186.040125ms Mar 26 00:34:59.635: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190227106s Mar 26 00:35:01.639: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 4.194418758s Mar 26 00:35:03.643: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 6.198455626s Mar 26 00:35:05.647: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 8.203032579s Mar 26 00:35:07.651: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 10.207050841s Mar 26 00:35:09.656: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 12.211743169s Mar 26 00:35:11.660: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 14.215864152s Mar 26 00:35:13.665: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 16.220184282s Mar 26 00:35:15.669: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 18.224528798s Mar 26 00:35:17.672: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 20.22817041s Mar 26 00:35:19.677: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Running", Reason="", readiness=true. Elapsed: 22.232801123s Mar 26 00:35:21.681: INFO: Pod "pod-subpath-test-configmap-dbfw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.237022731s STEP: Saw pod success Mar 26 00:35:21.681: INFO: Pod "pod-subpath-test-configmap-dbfw" satisfied condition "Succeeded or Failed" Mar 26 00:35:21.684: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-dbfw container test-container-subpath-configmap-dbfw: STEP: delete the pod Mar 26 00:35:21.706: INFO: Waiting for pod pod-subpath-test-configmap-dbfw to disappear Mar 26 00:35:21.710: INFO: Pod pod-subpath-test-configmap-dbfw no longer exists STEP: Deleting pod pod-subpath-test-configmap-dbfw Mar 26 00:35:21.710: INFO: Deleting pod "pod-subpath-test-configmap-dbfw" in namespace "subpath-5045" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:35:21.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5045" for this suite. • [SLOW TEST:24.517 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":203,"skipped":3257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:35:21.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-tmhw STEP: Creating a pod to test atomic-volume-subpath Mar 26 00:35:21.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tmhw" in namespace "subpath-6859" to be "Succeeded or Failed" Mar 26 00:35:21.827: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.853016ms Mar 26 00:35:23.831: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02308319s Mar 26 00:35:25.835: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 4.027116363s Mar 26 00:35:27.839: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 6.031416572s Mar 26 00:35:29.844: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.035873145s Mar 26 00:35:31.847: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 10.039353397s Mar 26 00:35:33.852: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 12.043636008s Mar 26 00:35:35.856: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 14.047657314s Mar 26 00:35:37.860: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 16.051842406s Mar 26 00:35:39.863: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 18.055251677s Mar 26 00:35:41.867: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 20.059345085s Mar 26 00:35:43.872: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Running", Reason="", readiness=true. Elapsed: 22.063672769s Mar 26 00:35:45.876: INFO: Pod "pod-subpath-test-projected-tmhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068288918s STEP: Saw pod success Mar 26 00:35:45.876: INFO: Pod "pod-subpath-test-projected-tmhw" satisfied condition "Succeeded or Failed" Mar 26 00:35:45.880: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-tmhw container test-container-subpath-projected-tmhw: STEP: delete the pod Mar 26 00:35:45.898: INFO: Waiting for pod pod-subpath-test-projected-tmhw to disappear Mar 26 00:35:45.902: INFO: Pod pod-subpath-test-projected-tmhw no longer exists STEP: Deleting pod pod-subpath-test-projected-tmhw Mar 26 00:35:45.902: INFO: Deleting pod "pod-subpath-test-projected-tmhw" in namespace "subpath-6859" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:35:45.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6859" for this suite. 
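Both Subpath specs above mount a single key of a volume (a ConfigMap in the first, a projected volume in the second) at a file path via subPath, then watch the container read the file for roughly 24 seconds while the atomic writer updates it. A minimal sketch of the ConfigMap variant; every name here is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/demo/file.txt"]
    volumeMounts:
    - name: cm
      mountPath: /etc/demo/file.txt  # mounts over a single file path...
      subPath: file.txt              # ...backed by one key of the volume
  volumes:
  - name: cm
    configMap:
      name: demo-config              # hypothetical ConfigMap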
• [SLOW TEST:24.211 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":204,"skipped":3287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:35:45.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:35:45.988: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 26 00:35:48.041: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:35:49.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8639" for this suite. 
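The ReplicationController spec above creates a quota admitting only two pods and an RC that asks for more, then checks that the RC surfaces a failure condition (a ReplicaFailure condition fed by FailedCreate events) until it is scaled down within quota. Roughly, with the replica count assumed as three and the image illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                        # only two pods may run in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                        # exceeds the quota; the extra pod is rejected
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: busybox               # illustrative image
        command: ["sh", "-c", "sleep 3600"]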
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":205,"skipped":3332,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:35:49.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2556 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2556 I0326 00:35:50.134518 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2556, replica count: 2 I0326 00:35:53.185004 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 00:35:56.185436 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 26 00:35:56.185: INFO: Creating new exec pod Mar 26 00:36:01.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2556 execpodc7vmt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 26 00:36:01.424: INFO: stderr: "I0326 00:36:01.330844 1878 log.go:172] (0xc0009146e0) (0xc000675540) Create stream\nI0326 00:36:01.330899 1878 log.go:172] (0xc0009146e0) (0xc000675540) Stream added, broadcasting: 1\nI0326 00:36:01.335022 1878 log.go:172] (0xc0009146e0) Reply frame received for 1\nI0326 00:36:01.335078 1878 log.go:172] (0xc0009146e0) (0xc0006755e0) Create stream\nI0326 00:36:01.335097 1878 log.go:172] (0xc0009146e0) (0xc0006755e0) Stream added, broadcasting: 3\nI0326 00:36:01.336238 1878 log.go:172] (0xc0009146e0) Reply frame received for 3\nI0326 00:36:01.336286 1878 log.go:172] (0xc0009146e0) (0xc000a2e000) Create stream\nI0326 00:36:01.336306 1878 log.go:172] (0xc0009146e0) (0xc000a2e000) Stream added, broadcasting: 5\nI0326 00:36:01.337422 1878 log.go:172] (0xc0009146e0) Reply frame received for 5\nI0326 00:36:01.418759 1878 log.go:172] (0xc0009146e0) Data frame received for 5\nI0326 00:36:01.418788 1878 log.go:172] (0xc000a2e000) (5) Data frame handling\nI0326 00:36:01.418805 1878 log.go:172] (0xc000a2e000) (5) Data frame sent\nI0326 00:36:01.418813 1878 log.go:172] (0xc0009146e0) Data frame received for 5\nI0326 00:36:01.418819 1878 log.go:172] (0xc000a2e000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0326 00:36:01.418835 1878 log.go:172] (0xc000a2e000) (5) Data frame 
sent\nI0326 00:36:01.418897 1878 log.go:172] (0xc0009146e0) Data frame received for 3\nI0326 00:36:01.418925 1878 log.go:172] (0xc0006755e0) (3) Data frame handling\nI0326 00:36:01.419143 1878 log.go:172] (0xc0009146e0) Data frame received for 5\nI0326 00:36:01.419158 1878 log.go:172] (0xc000a2e000) (5) Data frame handling\nI0326 00:36:01.420778 1878 log.go:172] (0xc0009146e0) Data frame received for 1\nI0326 00:36:01.420792 1878 log.go:172] (0xc000675540) (1) Data frame handling\nI0326 00:36:01.420809 1878 log.go:172] (0xc000675540) (1) Data frame sent\nI0326 00:36:01.420827 1878 log.go:172] (0xc0009146e0) (0xc000675540) Stream removed, broadcasting: 1\nI0326 00:36:01.420855 1878 log.go:172] (0xc0009146e0) Go away received\nI0326 00:36:01.421318 1878 log.go:172] (0xc0009146e0) (0xc000675540) Stream removed, broadcasting: 1\nI0326 00:36:01.421342 1878 log.go:172] (0xc0009146e0) (0xc0006755e0) Stream removed, broadcasting: 3\nI0326 00:36:01.421350 1878 log.go:172] (0xc0009146e0) (0xc000a2e000) Stream removed, broadcasting: 5\n" Mar 26 00:36:01.425: INFO: stdout: "" Mar 26 00:36:01.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2556 execpodc7vmt -- /bin/sh -x -c nc -zv -t -w 2 10.96.31.79 80' Mar 26 00:36:01.635: INFO: stderr: "I0326 00:36:01.551300 1900 log.go:172] (0xc000928000) (0xc000827220) Create stream\nI0326 00:36:01.551387 1900 log.go:172] (0xc000928000) (0xc000827220) Stream added, broadcasting: 1\nI0326 00:36:01.554744 1900 log.go:172] (0xc000928000) Reply frame received for 1\nI0326 00:36:01.554796 1900 log.go:172] (0xc000928000) (0xc000550000) Create stream\nI0326 00:36:01.554810 1900 log.go:172] (0xc000928000) (0xc000550000) Stream added, broadcasting: 3\nI0326 00:36:01.555820 1900 log.go:172] (0xc000928000) Reply frame received for 3\nI0326 00:36:01.555862 1900 log.go:172] (0xc000928000) (0xc000827400) Create stream\nI0326 00:36:01.555878 1900 log.go:172] (0xc000928000) (0xc000827400) Stream added, broadcasting: 5\nI0326 00:36:01.556707 1900 log.go:172] (0xc000928000) Reply frame received for 5\nI0326 00:36:01.629262 1900 log.go:172] (0xc000928000) Data frame received for 3\nI0326 00:36:01.629323 1900 log.go:172] (0xc000550000) (3) Data frame handling\nI0326 00:36:01.629358 1900 log.go:172] (0xc000928000) Data frame received for 5\nI0326 00:36:01.629383 1900 log.go:172] (0xc000827400) (5) Data frame handling\nI0326 00:36:01.629403 1900 log.go:172] (0xc000827400) (5) Data frame sent\nI0326 00:36:01.629425 1900 log.go:172] (0xc000928000) Data frame received for 5\nI0326 00:36:01.629442 1900 log.go:172] (0xc000827400) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.31.79 80\nConnection to 10.96.31.79 80 port [tcp/http] succeeded!\nI0326 00:36:01.631031 1900 log.go:172] (0xc000928000) Data frame received for 1\nI0326 00:36:01.631061 1900 log.go:172] (0xc000827220) (1) Data frame handling\nI0326 00:36:01.631078 1900 log.go:172] (0xc000827220) (1) Data frame sent\nI0326 00:36:01.631096 1900 log.go:172] (0xc000928000) (0xc000827220) Stream removed, broadcasting: 1\nI0326 00:36:01.631116 1900 log.go:172] (0xc000928000) Go away received\nI0326 00:36:01.631547 1900 log.go:172] (0xc000928000) (0xc000827220) Stream removed, broadcasting: 1\nI0326 00:36:01.631572 1900 log.go:172] (0xc000928000) (0xc000550000) Stream removed, broadcasting: 3\nI0326 00:36:01.631585 1900 log.go:172] (0xc000928000) (0xc000827400) Stream removed, broadcasting: 5\n" Mar 26 00:36:01.635: INFO: stdout: "" Mar 26 00:36:01.635: 
INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:01.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2556" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.160 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":206,"skipped":3345,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:01.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 26 00:36:01.729: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:15.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2641" for this suite.
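For the Services spec further above ("ExternalName to ClusterIP", namespace services-2556): it starts from an ExternalName service, changes the type to ClusterIP, backs the name with a replication controller, and verifies connectivity with nc from an exec pod. A sketch of the service before and after; the service name and port come from the log, the external target and selector are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com          # illustrative external target
---
# After the change: same name, now a ClusterIP service in front of the RC pods
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service       # illustrative selector
  ports:
  - port: 80
    targetPort: 80

The two nc probes in the log then confirm the service answers both by name and on its allocated ClusterIP (10.96.31.79:80).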
• [SLOW TEST:14.261 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":207,"skipped":3367,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:15.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:36:16.054: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:20.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6194" for this suite. 
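The CustomResourcePublishOpenAPI spec above ("removes definition from spec when one version gets changed to not be served") registers a CRD with two versions, flips one to served: false, and checks that its definition vanishes from the published OpenAPI while the other version is untouched. A minimal two-version CRD sketch; the group and all names are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com             # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true                     # still published in the OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                    # flipping this removes v2 from the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object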
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3375,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:20.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:36:20.187: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 26 00:36:22.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 create -f -' Mar 26 00:36:25.016: INFO: stderr: "" Mar 26 00:36:25.016: INFO: stdout: "e2e-test-crd-publish-openapi-2105-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 26 00:36:25.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 delete e2e-test-crd-publish-openapi-2105-crds test-foo' Mar 26 00:36:25.122: INFO: stderr: "" Mar 26 00:36:25.122: INFO: stdout: "e2e-test-crd-publish-openapi-2105-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 26 00:36:25.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 apply -f -' Mar 26 00:36:25.364: INFO: stderr: "" Mar 26 00:36:25.364: INFO: stdout: "e2e-test-crd-publish-openapi-2105-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 26 00:36:25.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 delete e2e-test-crd-publish-openapi-2105-crds test-foo' Mar 26 00:36:25.469: INFO: stderr: "" Mar 26 00:36:25.469: INFO: stdout: "e2e-test-crd-publish-openapi-2105-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 26 00:36:25.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 create -f -' Mar 26 00:36:25.692: INFO: rc: 1 Mar 26 00:36:25.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 apply -f -' Mar 26 00:36:25.920: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 26 00:36:25.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 create -f -' Mar 26 00:36:26.147: INFO: rc: 1 Mar 26 
00:36:26.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3204 apply -f -' Mar 26 00:36:26.387: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 26 00:36:26.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2105-crds' Mar 26 00:36:26.619: INFO: stderr: "" Mar 26 00:36:26.619: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 26 00:36:26.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2105-crds.metadata' Mar 26 00:36:26.863: INFO: stderr: "" Mar 26 00:36:26.863: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 26 00:36:26.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2105-crds.spec' Mar 26 00:36:27.093: INFO: stderr: "" Mar 26 00:36:27.093: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 26 00:36:27.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2105-crds.spec.bars' Mar 26 00:36:27.317: INFO: stderr: "" Mar 26 00:36:27.317: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2105-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 26 00:36:27.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2105-crds.spec.bars2' Mar 26 00:36:27.551: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:29.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3204" for this suite. • [SLOW TEST:9.372 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":209,"skipped":3387,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:29.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 26 00:36:29.536: INFO: Waiting up to 5m0s for pod "pod-facc3b12-6a77-4c8d-8100-e647bff7afa9" in namespace "emptydir-2105" to be "Succeeded or Failed" Mar 26 00:36:29.559: INFO: Pod "pod-facc3b12-6a77-4c8d-8100-e647bff7afa9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.741183ms Mar 26 00:36:31.562: INFO: Pod "pod-facc3b12-6a77-4c8d-8100-e647bff7afa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026297442s Mar 26 00:36:33.566: INFO: Pod "pod-facc3b12-6a77-4c8d-8100-e647bff7afa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030326532s STEP: Saw pod success Mar 26 00:36:33.566: INFO: Pod "pod-facc3b12-6a77-4c8d-8100-e647bff7afa9" satisfied condition "Succeeded or Failed" Mar 26 00:36:33.570: INFO: Trying to get logs from node latest-worker pod pod-facc3b12-6a77-4c8d-8100-e647bff7afa9 container test-container: STEP: delete the pod Mar 26 00:36:33.611: INFO: Waiting for pod pod-facc3b12-6a77-4c8d-8100-e647bff7afa9 to disappear Mar 26 00:36:33.616: INFO: Pod pod-facc3b12-6a77-4c8d-8100-e647bff7afa9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:33.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2105" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3397,"failed":0} ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:33.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:36:33.692: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:37.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9855" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3397,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:37.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:36:38.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:36:40.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779798, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779798, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779798, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779798, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:36:43.576: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:43.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3471" for this suite. STEP: Destroying namespace "webhook-3471-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.944 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":212,"skipped":3406,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:43.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-324006ce-6371-4325-950f-8f6804f8e80c STEP: Creating a pod to test consume secrets Mar 26 00:36:43.803: INFO: Waiting up to 5m0s for pod "pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af" in namespace "secrets-5616" to be "Succeeded or Failed" Mar 26 00:36:43.822: INFO: Pod "pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af": Phase="Pending", Reason="", readiness=false. Elapsed: 18.631162ms Mar 26 00:36:45.832: INFO: Pod "pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029101481s Mar 26 00:36:47.837: INFO: Pod "pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033672137s STEP: Saw pod success Mar 26 00:36:47.837: INFO: Pod "pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af" satisfied condition "Succeeded or Failed" Mar 26 00:36:47.840: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af container secret-env-test: STEP: delete the pod Mar 26 00:36:47.859: INFO: Waiting for pod pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af to disappear Mar 26 00:36:47.862: INFO: Pod pod-secrets-98bd1bca-3d98-4307-a21d-896ecc4d09af no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:47.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5616" for this suite. 
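A minimal sketch of what the Secrets env-var test above constructs: a Secret plus a pod whose container pulls one key into its environment via valueFrom.secretKeyRef. Assumed rather than taken from the test: client-go v0.18+ signatures, the busybox image, and all object names.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        client := kubernetes.NewForConfigOrDie(config)
        ns := "default" // placeholder

        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test"}, // placeholder
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
        if _, err := client.CoreV1().Secrets(ns).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"}, // placeholder
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-env-test",
                    Image:   "busybox:1.29", // placeholder image
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // The test then waits for the pod to reach "Succeeded" and checks the
        // container log for SECRET_DATA=value-1.
    }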
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3409,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:47.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:36:47.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5706' Mar 26 00:36:48.226: INFO: stderr: "" Mar 26 00:36:48.226: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 26 00:36:48.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5706' Mar 26 00:36:48.478: INFO: stderr: "" Mar 26 00:36:48.478: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 26 00:36:49.494: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:36:49.494: INFO: Found 0 / 1 Mar 26 00:36:50.483: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:36:50.483: INFO: Found 1 / 1 Mar 26 00:36:50.483: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 26 00:36:50.486: INFO: Selector matched 1 pods for map[app:agnhost] Mar 26 00:36:50.486: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 26 00:36:50.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-g4ct4 --namespace=kubectl-5706' Mar 26 00:36:50.642: INFO: stderr: "" Mar 26 00:36:50.642: INFO: stdout: "Name: agnhost-master-g4ct4\nNamespace: kubectl-5706\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Thu, 26 Mar 2020 00:36:48 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.83\nIPs:\n IP: 10.244.1.83\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://5acf98a08b8e76eddf79486ae2d5b1f7fdc7347a6214ed19931e6a92e073fffa\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 26 Mar 2020 00:36:50 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-82464 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-82464:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-82464\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-5706/agnhost-master-g4ct4 to latest-worker2\n Normal Pulled 1s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 0s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 0s kubelet, latest-worker2 Started container agnhost-master\n" Mar 26 00:36:50.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5706' Mar 26 00:36:50.755: INFO: stderr: "" Mar 26 00:36:50.755: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5706\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-g4ct4\n" Mar 26 00:36:50.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5706' Mar 26 00:36:50.870: INFO: stderr: "" Mar 26 00:36:50.870: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5706\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.246.111\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.83:6379\nSession Affinity: None\nEvents: \n" Mar 26 00:36:50.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' Mar 26 00:36:50.972: INFO: stderr: "" Mar 26 00:36:50.972: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 26 Mar 2020 00:36:50 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 26 Mar 2020 00:34:33 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 26 Mar 2020 00:34:33 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 26 Mar 2020 00:34:33 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 26 Mar 2020 00:34:33 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 10d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 10d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 10d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 26 00:36:50.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config describe namespace kubectl-5706' Mar 26 00:36:51.063: INFO: stderr: "" Mar 26 00:36:51.063: INFO: stdout: "Name: kubectl-5706\nLabels: e2e-framework=kubectl\n e2e-run=5c7da4cb-aa7e-4d4e-bdbe-efbb09c143c4\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:51.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5706" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":214,"skipped":3410,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:51.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-499e0334-306f-4ff9-a85d-290fefaad219 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:51.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3428" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":215,"skipped":3416,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:51.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-529909bc-db8b-4164-8bc9-46fc529b3842 STEP: Creating a pod to test consume secrets Mar 26 00:36:51.242: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5" in namespace "projected-9200" to be "Succeeded or Failed" Mar 26 00:36:51.246: INFO: Pod "pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.356934ms Mar 26 00:36:53.316: INFO: Pod "pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.07410981s Mar 26 00:36:55.321: INFO: Pod "pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078495179s STEP: Saw pod success Mar 26 00:36:55.321: INFO: Pod "pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5" satisfied condition "Succeeded or Failed" Mar 26 00:36:55.323: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5 container projected-secret-volume-test: STEP: delete the pod Mar 26 00:36:55.412: INFO: Waiting for pod pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5 to disappear Mar 26 00:36:55.431: INFO: Pod pod-projected-secrets-d2bcae75-7147-49cf-95f4-befae83001b5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:55.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9200" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3419,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:55.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-b4e3a03c-6efe-4b9c-abf9-f9081192f1df STEP: Creating a pod to test consume configMaps Mar 26 00:36:55.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c" in namespace "projected-7116" to be "Succeeded or Failed" Mar 26 00:36:55.641: INFO: Pod "pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310994ms Mar 26 00:36:57.683: INFO: Pod "pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045092093s Mar 26 00:36:59.687: INFO: Pod "pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049246598s STEP: Saw pod success Mar 26 00:36:59.687: INFO: Pod "pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c" satisfied condition "Succeeded or Failed" Mar 26 00:36:59.691: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c container projected-configmap-volume-test: STEP: delete the pod Mar 26 00:36:59.743: INFO: Waiting for pod pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c to disappear Mar 26 00:36:59.749: INFO: Pod pod-projected-configmaps-2f2c4dac-cb10-4847-92cf-7b3df8d91b7c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:36:59.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7116" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:36:59.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0326 00:37:00.848621 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 26 00:37:00.848: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:37:00.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8255" for this suite. 
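The garbage-collector test above deletes a Deployment without orphaning and then waits for its ReplicaSet to disappear; the "expected 0 pods, got 2 pods" and "expected 0 rs, got 1 rs" STEP lines are that wait retrying until the cascade completes. A hedged client-go sketch of the same flow, assuming v0.18+ signatures and placeholder names:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        client := kubernetes.NewForConfigOrDie(config)
        ns := "default" // placeholder

        // Background propagation deletes the Deployment immediately and lets
        // the garbage collector cascade to the owned ReplicaSet and pods.
        policy := metav1.DeletePropagationBackground
        if err := client.AppsV1().Deployments(ns).Delete(context.TODO(),
            "simpletest-deployment", // placeholder name
            metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
            panic(err)
        }

        // Poll until every ReplicaSet in the namespace is gone, i.e. the
        // test's "wait for all rs to be garbage collected" step.
        err := wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
            rsList, err := client.AppsV1().ReplicaSets(ns).List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return false, err
            }
            return len(rsList.Items) == 0, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("replica sets garbage collected")
    }

The other propagation policies trade differently: Foreground blocks the owner's removal on its dependents, and Orphan detaches them, which is the contrast the later "keep the rc around" test draws.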
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":218,"skipped":3448,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:37:00.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4972/configmap-test-b3f49856-ec43-40ef-9fa3-ba89eb17a90b STEP: Creating a pod to test consume configMaps Mar 26 00:37:00.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71" in namespace "configmap-4972" to be "Succeeded or Failed" Mar 26 00:37:01.017: INFO: Pod "pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71": Phase="Pending", Reason="", readiness=false. Elapsed: 20.84412ms Mar 26 00:37:03.020: INFO: Pod "pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023922727s Mar 26 00:37:05.023: INFO: Pod "pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026584861s STEP: Saw pod success Mar 26 00:37:05.023: INFO: Pod "pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71" satisfied condition "Succeeded or Failed" Mar 26 00:37:05.025: INFO: Trying to get logs from node latest-worker pod pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71 container env-test: STEP: delete the pod Mar 26 00:37:05.038: INFO: Waiting for pod pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71 to disappear Mar 26 00:37:05.056: INFO: Pod pod-configmaps-eca35c19-e3bb-4b85-b375-89cc6a309a71 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:37:05.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4972" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:37:05.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 26 00:37:05.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-126' Mar 26 00:37:05.198: INFO: stderr: "" Mar 26 00:37:05.198: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Mar 26 00:37:05.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-126' Mar 26 00:37:12.994: INFO: stderr: "" Mar 26 00:37:12.994: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:37:12.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-126" for this suite. 
• [SLOW TEST:7.952 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":220,"skipped":3543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:37:13.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4822 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 26 00:37:13.056: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 26 00:37:13.342: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 26 00:37:15.366: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 26 00:37:17.346: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:19.346: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:21.345: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:23.345: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:25.347: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:27.346: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:29.346: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:31.346: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:33.346: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 26 00:37:35.348: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 26 00:37:35.353: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 26 00:37:37.358: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 26 00:37:41.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.199:8080/dial?request=hostname&protocol=udp&host=10.244.2.198&port=8081&tries=1'] Namespace:pod-network-test-4822 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 00:37:41.382: INFO: >>> kubeConfig: /root/.kube/config I0326 00:37:41.415811 7 log.go:172] (0xc002c264d0) (0xc0010e65a0) Create stream I0326 00:37:41.415857 7 log.go:172] (0xc002c264d0) (0xc0010e65a0) Stream added, 
broadcasting: 1 I0326 00:37:41.418139 7 log.go:172] (0xc002c264d0) Reply frame received for 1 I0326 00:37:41.418177 7 log.go:172] (0xc002c264d0) (0xc0010e6640) Create stream I0326 00:37:41.418190 7 log.go:172] (0xc002c264d0) (0xc0010e6640) Stream added, broadcasting: 3 I0326 00:37:41.419255 7 log.go:172] (0xc002c264d0) Reply frame received for 3 I0326 00:37:41.419293 7 log.go:172] (0xc002c264d0) (0xc0010e66e0) Create stream I0326 00:37:41.419307 7 log.go:172] (0xc002c264d0) (0xc0010e66e0) Stream added, broadcasting: 5 I0326 00:37:41.420417 7 log.go:172] (0xc002c264d0) Reply frame received for 5 I0326 00:37:41.513696 7 log.go:172] (0xc002c264d0) Data frame received for 3 I0326 00:37:41.513733 7 log.go:172] (0xc0010e6640) (3) Data frame handling I0326 00:37:41.513754 7 log.go:172] (0xc0010e6640) (3) Data frame sent I0326 00:37:41.514139 7 log.go:172] (0xc002c264d0) Data frame received for 5 I0326 00:37:41.514197 7 log.go:172] (0xc0010e66e0) (5) Data frame handling I0326 00:37:41.514245 7 log.go:172] (0xc002c264d0) Data frame received for 3 I0326 00:37:41.514283 7 log.go:172] (0xc0010e6640) (3) Data frame handling I0326 00:37:41.515893 7 log.go:172] (0xc002c264d0) Data frame received for 1 I0326 00:37:41.515916 7 log.go:172] (0xc0010e65a0) (1) Data frame handling I0326 00:37:41.515937 7 log.go:172] (0xc0010e65a0) (1) Data frame sent I0326 00:37:41.515963 7 log.go:172] (0xc002c264d0) (0xc0010e65a0) Stream removed, broadcasting: 1 I0326 00:37:41.516110 7 log.go:172] (0xc002c264d0) (0xc0010e65a0) Stream removed, broadcasting: 1 I0326 00:37:41.516143 7 log.go:172] (0xc002c264d0) (0xc0010e6640) Stream removed, broadcasting: 3 I0326 00:37:41.516179 7 log.go:172] (0xc002c264d0) (0xc0010e66e0) Stream removed, broadcasting: 5 Mar 26 00:37:41.516: INFO: Waiting for responses: map[] I0326 00:37:41.516277 7 log.go:172] (0xc002c264d0) Go away received Mar 26 00:37:41.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.199:8080/dial?request=hostname&protocol=udp&host=10.244.1.86&port=8081&tries=1'] Namespace:pod-network-test-4822 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 00:37:41.519: INFO: >>> kubeConfig: /root/.kube/config I0326 00:37:41.553594 7 log.go:172] (0xc0022abce0) (0xc000d94c80) Create stream I0326 00:37:41.553620 7 log.go:172] (0xc0022abce0) (0xc000d94c80) Stream added, broadcasting: 1 I0326 00:37:41.555793 7 log.go:172] (0xc0022abce0) Reply frame received for 1 I0326 00:37:41.555837 7 log.go:172] (0xc0022abce0) (0xc0018f37c0) Create stream I0326 00:37:41.555869 7 log.go:172] (0xc0022abce0) (0xc0018f37c0) Stream added, broadcasting: 3 I0326 00:37:41.557043 7 log.go:172] (0xc0022abce0) Reply frame received for 3 I0326 00:37:41.557093 7 log.go:172] (0xc0022abce0) (0xc0010e68c0) Create stream I0326 00:37:41.557324 7 log.go:172] (0xc0022abce0) (0xc0010e68c0) Stream added, broadcasting: 5 I0326 00:37:41.558543 7 log.go:172] (0xc0022abce0) Reply frame received for 5 I0326 00:37:41.627573 7 log.go:172] (0xc0022abce0) Data frame received for 3 I0326 00:37:41.627622 7 log.go:172] (0xc0018f37c0) (3) Data frame handling I0326 00:37:41.627652 7 log.go:172] (0xc0018f37c0) (3) Data frame sent I0326 00:37:41.628030 7 log.go:172] (0xc0022abce0) Data frame received for 5 I0326 00:37:41.628064 7 log.go:172] (0xc0010e68c0) (5) Data frame handling I0326 00:37:41.628463 7 log.go:172] (0xc0022abce0) Data frame received for 3 I0326 00:37:41.628487 7 log.go:172] (0xc0018f37c0) (3) Data frame handling I0326 
00:37:41.630043 7 log.go:172] (0xc0022abce0) Data frame received for 1 I0326 00:37:41.630062 7 log.go:172] (0xc000d94c80) (1) Data frame handling I0326 00:37:41.630082 7 log.go:172] (0xc000d94c80) (1) Data frame sent I0326 00:37:41.630095 7 log.go:172] (0xc0022abce0) (0xc000d94c80) Stream removed, broadcasting: 1 I0326 00:37:41.630113 7 log.go:172] (0xc0022abce0) Go away received I0326 00:37:41.630315 7 log.go:172] (0xc0022abce0) (0xc000d94c80) Stream removed, broadcasting: 1 I0326 00:37:41.630345 7 log.go:172] (0xc0022abce0) (0xc0018f37c0) Stream removed, broadcasting: 3 I0326 00:37:41.630364 7 log.go:172] (0xc0022abce0) (0xc0010e68c0) Stream removed, broadcasting: 5 Mar 26 00:37:41.630: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:37:41.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4822" for this suite. • [SLOW TEST:28.624 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:37:41.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 26 00:37:48.939: INFO: 0 pods remaining Mar 26 00:37:48.939: INFO: 0 pods has nil DeletionTimestamp Mar 26 00:37:48.939: INFO: Mar 26 00:37:49.580: INFO: 0 pods remaining Mar 26 00:37:49.580: INFO: 0 pods has nil DeletionTimestamp Mar 26 00:37:49.580: INFO: Mar 26 00:37:50.209: INFO: 0 pods remaining Mar 26 00:37:50.209: INFO: 0 pods has nil DeletionTimestamp Mar 26 00:37:50.209: INFO: STEP: Gathering metrics W0326 00:37:51.440545 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 26 00:37:51.440: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:37:51.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4446" for this suite. • [SLOW TEST:10.098 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":222,"skipped":3638,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:37:51.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:37:53.694: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:37:55.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779873, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779873, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779874, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720779873, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:37:58.884: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 26 00:37:58.899: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:37:58.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9749" for this suite. STEP: Destroying namespace "webhook-9749-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.288 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":223,"skipped":3640,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:37:59.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-6c0513c0-d328-439b-8890-37e91983a332 in namespace container-probe-619 Mar 26 00:38:03.144: INFO: Started pod liveness-6c0513c0-d328-439b-8890-37e91983a332 in namespace container-probe-619 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 00:38:03.147: INFO: Initial restart count of pod liveness-6c0513c0-d328-439b-8890-37e91983a332 is 0 Mar 26 00:38:21.187: INFO: Restart count of pod container-probe-619/liveness-6c0513c0-d328-439b-8890-37e91983a332 is now 1 (18.040618836s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:38:21.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-619" for this suite. • [SLOW TEST:22.198 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3656,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:38:21.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:38:21.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc" in namespace "downward-api-8174" to be "Succeeded or Failed" Mar 26 00:38:21.309: INFO: Pod "downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.390903ms Mar 26 00:38:23.314: INFO: Pod "downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008777987s Mar 26 00:38:25.318: INFO: Pod "downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012674729s STEP: Saw pod success Mar 26 00:38:25.318: INFO: Pod "downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc" satisfied condition "Succeeded or Failed" Mar 26 00:38:25.322: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc container client-container: STEP: delete the pod Mar 26 00:38:25.383: INFO: Waiting for pod downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc to disappear Mar 26 00:38:25.386: INFO: Pod downwardapi-volume-985cde8f-6283-4c3b-a0d6-96be15a4dacc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:38:25.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8174" for this suite. 
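A sketch of the pod shape behind the DefaultMode test above: a downward API volume whose files inherit a default permission mode unless an individual item overrides it. Client-go v0.18+ signatures, the image, mount path, and the 0400 mode value are assumptions:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        client := kubernetes.NewForConfigOrDie(config)

        defaultMode := int32(0400) // files default to r-------- unless an item sets its own Mode
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test"}, // placeholder
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29", // placeholder image
                    Command: []string{"sh", "-c", "ls -l /etc/podinfo"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            DefaultMode: &defaultMode,
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

The "cpu limit" and "podname only" downward API tests that follow use the same volume mechanism, varying only the FieldRef/ResourceFieldRef items projected into the files.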
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3656,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:38:25.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 26 00:38:25.434: INFO: >>> kubeConfig: /root/.kube/config Mar 26 00:38:27.352: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:38:37.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7493" for this suite. • [SLOW TEST:12.495 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":226,"skipped":3659,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:38:37.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:38:37.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a" in namespace "downward-api-6813" to be "Succeeded or Failed" Mar 26 00:38:38.004: INFO: Pod "downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.605691ms Mar 26 00:38:40.016: INFO: Pod "downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021205287s Mar 26 00:38:42.021: INFO: Pod "downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025632011s STEP: Saw pod success Mar 26 00:38:42.021: INFO: Pod "downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a" satisfied condition "Succeeded or Failed" Mar 26 00:38:42.024: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a container client-container: STEP: delete the pod Mar 26 00:38:42.066: INFO: Waiting for pod downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a to disappear Mar 26 00:38:42.081: INFO: Pod downwardapi-volume-4ddaf1ee-59d5-423c-8249-d84194b3c47a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:38:42.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6813" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3675,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:38:42.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:38:42.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be" in namespace "downward-api-6639" to be "Succeeded or Failed" Mar 26 00:38:42.173: INFO: Pod "downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be": Phase="Pending", Reason="", readiness=false. Elapsed: 5.509308ms Mar 26 00:38:44.177: INFO: Pod "downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009388672s Mar 26 00:38:46.182: INFO: Pod "downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013953576s STEP: Saw pod success Mar 26 00:38:46.182: INFO: Pod "downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be" satisfied condition "Succeeded or Failed" Mar 26 00:38:46.185: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be container client-container: STEP: delete the pod Mar 26 00:38:46.202: INFO: Waiting for pod downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be to disappear Mar 26 00:38:46.206: INFO: Pod downwardapi-volume-2f492b2b-d3e7-4d9e-81b1-b3cfd2ca04be no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:38:46.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6639" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3681,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:38:46.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-817c4003-a8ae-439c-bb50-2dad02f6d433 in namespace container-probe-5964 Mar 26 00:38:50.358: INFO: Started pod test-webserver-817c4003-a8ae-439c-bb50-2dad02f6d433 in namespace container-probe-5964 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 00:38:50.360: INFO: Initial restart count of pod test-webserver-817c4003-a8ae-439c-bb50-2dad02f6d433 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:42:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5964" for this suite. 
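Both probing tests above hinge on the same liveness-probe shape: a probe that keeps succeeding leaves restartCount at 0 (this test), while one that starts failing makes the kubelet restart the container once FailureThreshold consecutive failures accumulate (the earlier /healthz test, where restartCount went from 0 to 1). A sketch of the probe definition, using the v0.17-era corev1 field name Handler and placeholder values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        container := corev1.Container{
            Name:  "test-webserver",
            Image: "nginx:1.17", // placeholder image
            LivenessProbe: &corev1.Probe{
                Handler: corev1.Handler{ // renamed ProbeHandler in client-go >= v0.22
                    HTTPGet: &corev1.HTTPGetAction{
                        Path: "/healthz",
                        Port: intstr.FromInt(80),
                    },
                },
                InitialDelaySeconds: 15, // give the server time to come up
                PeriodSeconds:       10, // probe every 10s
                FailureThreshold:    3,  // 3 straight failures => kubelet restarts the container
            },
        }
        fmt.Printf("%+v\n", container.LivenessProbe)
    }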
• [SLOW TEST:244.994 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3702,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:42:51.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:42:51.253: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 26 00:42:54.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5381 create -f -' Mar 26 00:42:57.115: INFO: stderr: "" Mar 26 00:42:57.115: INFO: stdout: "e2e-test-crd-publish-openapi-9521-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 26 00:42:57.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5381 delete e2e-test-crd-publish-openapi-9521-crds test-cr' Mar 26 00:42:57.222: INFO: stderr: "" Mar 26 00:42:57.222: INFO: stdout: "e2e-test-crd-publish-openapi-9521-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 26 00:42:57.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5381 apply -f -' Mar 26 00:42:57.468: INFO: stderr: "" Mar 26 00:42:57.468: INFO: stdout: "e2e-test-crd-publish-openapi-9521-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 26 00:42:57.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5381 delete e2e-test-crd-publish-openapi-9521-crds test-cr' Mar 26 00:42:57.562: INFO: stderr: "" Mar 26 00:42:57.562: INFO: stdout: "e2e-test-crd-publish-openapi-9521-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 26 00:42:57.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9521-crds' Mar 26 00:42:57.853: INFO: stderr: "" Mar 26 00:42:57.854: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9521-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:43:00.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5381" for this suite. • [SLOW TEST:9.545 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":230,"skipped":3717,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:43:00.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4843, will wait for the garbage collector to delete the pods Mar 26 00:43:04.907: INFO: Deleting Job.batch foo took: 24.665388ms Mar 26 00:43:05.207: INFO: Terminating Job.batch foo pods took: 300.250852ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:43:43.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4843" for this suite. 
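
The Job spec above ("delete a job", then "Ensuring job was deleted") maps onto a delete call with an explicit propagation policy plus a poll for NotFound. A sketch, reusing the namespace and job name from this run; the choice of foreground propagation is an assumption that matches the log's wait-for-the-garbage-collector behavior:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Foreground propagation: the Job object lingers until the garbage
	// collector has removed its pods, matching the waiting seen above.
	policy := metav1.DeletePropagationForeground
	err = cs.BatchV1().Jobs("job-4843").Delete(ctx, "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	// Mirror "Ensuring job was deleted".
	for {
		_, err := cs.BatchV1().Jobs("job-4843").Get(ctx, "foo", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("job and its pods are gone")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
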
• [SLOW TEST:42.367 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":231,"skipped":3734,"failed":0} S ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:43:43.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 26 00:43:43.168: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Mar 26 00:43:43.626: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 26 00:43:45.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780223, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780223, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780223, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780223, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 26 00:43:48.491: INFO: Waited 627.783586ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:43:48.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7380" for this suite. 
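
"Registering the sample API server" above is a Deployment and Service for the backend plus an APIService object that tells the aggregator which group/version to proxy to it. A hedged sketch of just the APIService step using the kube-aggregator client; the wardle group/version, service name, and priorities are assumptions borrowed from the upstream sample-apiserver, not values read from this log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := aggregatorclient.NewForConfigOrDie(cfg)

	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		// The name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-7380", // the test namespace from this run
				Name:      "sample-api",      // hypothetical service name
				Port:      &port,
			},
			// A real registration sets CABundle; skipping verification is
			// only acceptable in a demo.
			InsecureSkipTLSVerify: true,
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	if _, err := client.ApiregistrationV1().APIServices().Create(
		context.TODO(), apiService, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Once the APIService is Available, requests under /apis/wardle.example.com/v1alpha1 are proxied by the kube-apiserver to the sample backend, which is what the "ready to handle requests" wait above confirms.
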
• [SLOW TEST:5.926 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":232,"skipped":3735,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:43:49.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:43:55.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4956" for this suite. STEP: Destroying namespace "nsdeletetest-1454" for this suite. Mar 26 00:43:55.699: INFO: Namespace nsdeletetest-1454 was already deleted STEP: Destroying namespace "nsdeletetest-1424" for this suite. 
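
The namespace spec above reduces to: create a namespace, put a service in it, delete the namespace, wait out the finalizer, recreate it, and confirm the service list is empty. A sketch with assumed names:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest"}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"foo": "bar"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services("nsdeletetest").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Namespace deletion is asynchronous; wait for the finalizer to run.
	for {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "nsdeletetest", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
	// Recreate and confirm the service did not survive.
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svcs, err := cs.CoreV1().Services("nsdeletetest").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("services in recreated namespace:", len(svcs.Items)) // expect 0
}
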
• [SLOW TEST:6.657 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":233,"skipped":3741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:43:55.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:43:55.803: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b3066291-b798-408f-b626-733ad8c3212a" in namespace "security-context-test-8843" to be "Succeeded or Failed" Mar 26 00:43:55.813: INFO: Pod "busybox-privileged-false-b3066291-b798-408f-b626-733ad8c3212a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.037708ms Mar 26 00:43:57.837: INFO: Pod "busybox-privileged-false-b3066291-b798-408f-b626-733ad8c3212a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034405344s Mar 26 00:43:59.842: INFO: Pod "busybox-privileged-false-b3066291-b798-408f-b626-733ad8c3212a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03916612s Mar 26 00:43:59.842: INFO: Pod "busybox-privileged-false-b3066291-b798-408f-b626-733ad8c3212a" satisfied condition "Succeeded or Failed" Mar 26 00:43:59.860: INFO: Got logs for pod "busybox-privileged-false-b3066291-b798-408f-b626-733ad8c3212a": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:43:59.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8843" for this suite. 
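
The log line "ip: RTNETLINK answers: Operation not permitted" above is the kernel refusing CAP_NET_ADMIN to an unprivileged container. A sketch of a pod reproducing it; the image and exact command are illustrative, and the trailing "|| true" is an assumption that keeps the pod in Succeeded while the refusal still lands in the logs, matching the phases above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	privileged := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-unprivileged"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Creating a network device needs CAP_NET_ADMIN.
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Give the short-lived container time to run, then read its logs; expect
	//   ip: RTNETLINK answers: Operation not permitted
	time.Sleep(10 * time.Second)
	raw, err := cs.CoreV1().Pods("default").GetLogs("busybox-unprivileged",
		&corev1.PodLogOptions{}).Do(ctx).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}
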
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3765,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:43:59.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 00:44:06.061: INFO: DNS probes using dns-test-14eb1ac6-38fd-4a5a-a211-773c47681051 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 00:44:12.185: INFO: File wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:12.189: INFO: File jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:12.189: INFO: Lookups using dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 failed for: [wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local] Mar 26 00:44:17.194: INFO: File wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:17.196: INFO: File jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 26 00:44:17.196: INFO: Lookups using dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 failed for: [wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local] Mar 26 00:44:22.205: INFO: File wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:22.221: INFO: File jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:22.221: INFO: Lookups using dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 failed for: [wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local] Mar 26 00:44:27.194: INFO: File wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:27.197: INFO: File jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:27.197: INFO: Lookups using dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 failed for: [wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local] Mar 26 00:44:32.194: INFO: File wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:32.197: INFO: File jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 00:44:32.197: INFO: Lookups using dns-8672/dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 failed for: [wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local] Mar 26 00:44:37.196: INFO: DNS probes using dns-test-6a5d0f3c-a08f-44e8-acae-fbdb32a223d7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 00:44:43.691: INFO: DNS probes using dns-test-1186be63-9528-4799-b0bc-680f53b90e1c succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:44:43.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8672" for this suite. 
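
An ExternalName service, as exercised above, is pure DNS: the cluster DNS answers <name>.<namespace>.svc.cluster.local with a CNAME to spec.externalName, and repointing it is just an update to that field. A sketch of the create-then-repoint sequence, with the "default" namespace as an assumption:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	created, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Repointing the CNAME; resolvers observe it only after their cached
	// answer expires, hence the retry loop visible in the log above.
	created.Spec.ExternalName = "bar.example.com"
	if _, err := cs.CoreV1().Services("default").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

The probe pods in the spec verify the answer exactly as the logged commands do, e.g. `dig +short dns-test-service-3.default.svc.cluster.local CNAME` run from inside the cluster.
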
• [SLOW TEST:43.860 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":235,"skipped":3778,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:44:43.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 26 00:44:48.425: INFO: Successfully updated pod "labelsupdate8647bb7c-ac04-4817-9ef0-681af531449f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:44:50.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1723" for this suite. • [SLOW TEST:6.689 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3793,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:44:50.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:45:01.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8593" for this suite. • [SLOW TEST:11.141 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":237,"skipped":3793,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:45:01.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-77097527-d86d-4699-b18b-0d4221fc89a0 STEP: Creating a pod to test consume secrets Mar 26 00:45:01.670: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562" in namespace "projected-4275" to be "Succeeded or Failed" Mar 26 00:45:01.674: INFO: Pod "pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562": Phase="Pending", Reason="", readiness=false. Elapsed: 3.85892ms Mar 26 00:45:03.678: INFO: Pod "pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008153467s Mar 26 00:45:05.682: INFO: Pod "pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012176414s STEP: Saw pod success Mar 26 00:45:05.682: INFO: Pod "pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562" satisfied condition "Succeeded or Failed" Mar 26 00:45:05.686: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562 container secret-volume-test: STEP: delete the pod Mar 26 00:45:05.727: INFO: Waiting for pod pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562 to disappear Mar 26 00:45:05.733: INFO: Pod pod-projected-secrets-183bac7c-eeb7-4fea-9a8a-105bc2d72562 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:45:05.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4275" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":3804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:45:05.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:45:09.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6761" for this suite. 
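
The "terminated reason" check above amounts to reading state.terminated out of the pod's containerStatuses once the command has exited. A sketch with an assumed busybox image and /bin/false as the always-failing command:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "always-fails"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 immediately
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("default").Get(ctx, "always-fails", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(p.Status.ContainerStatuses) > 0 {
			if t := p.Status.ContainerStatuses[0].State.Terminated; t != nil {
				// Reason is typically "Error" for a nonzero exit code.
				fmt.Printf("reason=%q exitCode=%d\n", t.Reason, t.ExitCode)
				return
			}
		}
		time.Sleep(time.Second)
	}
}
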
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":3842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:45:09.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0326 00:45:20.693420 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 26 00:45:20.693: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:45:20.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8797" for this suite. 
• [SLOW TEST:10.856 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":240,"skipped":3872,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:45:20.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-7f109581-ce5c-4744-b34b-9d13e1c415ed STEP: Creating a pod to test consume configMaps Mar 26 00:45:20.998: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67" in namespace "projected-6794" to be "Succeeded or Failed" Mar 26 00:45:21.043: INFO: Pod "pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67": Phase="Pending", Reason="", readiness=false. Elapsed: 45.858603ms Mar 26 00:45:23.048: INFO: Pod "pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050198701s Mar 26 00:45:25.052: INFO: Pod "pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054500659s STEP: Saw pod success Mar 26 00:45:25.052: INFO: Pod "pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67" satisfied condition "Succeeded or Failed" Mar 26 00:45:25.056: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67 container projected-configmap-volume-test: STEP: delete the pod Mar 26 00:45:25.074: INFO: Waiting for pod pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67 to disappear Mar 26 00:45:25.106: INFO: Pod pod-projected-configmaps-07be8ff6-1e82-44af-8735-a9df7224ed67 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:45:25.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6794" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":3873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:45:25.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-59705d24-f401-49e5-a285-10d6daa794e5 STEP: Creating configMap with name cm-test-opt-upd-6000c507-08d2-48a0-bea1-3a1984cbaf49 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-59705d24-f401-49e5-a285-10d6daa794e5 STEP: Updating configmap cm-test-opt-upd-6000c507-08d2-48a0-bea1-3a1984cbaf49 STEP: Creating configMap with name cm-test-opt-create-25868f12-3acd-4b0c-b0dd-57beb2a93e78 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:46:43.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7078" for this suite. 
• [SLOW TEST:78.553 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":3906,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:46:43.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-dadd10e0-e03e-4ff0-a472-f20fba015570 STEP: Creating a pod to test consume configMaps Mar 26 00:46:43.748: INFO: Waiting up to 5m0s for pod "pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26" in namespace "configmap-6373" to be "Succeeded or Failed" Mar 26 00:46:43.753: INFO: Pod "pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.935043ms Mar 26 00:46:45.758: INFO: Pod "pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009333538s Mar 26 00:46:47.762: INFO: Pod "pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013418156s STEP: Saw pod success Mar 26 00:46:47.762: INFO: Pod "pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26" satisfied condition "Succeeded or Failed" Mar 26 00:46:47.765: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26 container configmap-volume-test: STEP: delete the pod Mar 26 00:46:47.795: INFO: Waiting for pod pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26 to disappear Mar 26 00:46:47.806: INFO: Pod pod-configmaps-beb64c91-617f-4b86-87d4-07bf40fbde26 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:46:47.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6373" for this suite. 
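
"With mappings" in the ConfigMap spec above means the volume projects selected keys at chosen paths instead of one file per key. A sketch, with key and path names assumed in the style of this suite:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mapping-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Only "data-1" is projected, and at a chosen path
						// rather than at a file named after the key.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
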
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":3926,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:46:47.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:46:48.650: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:46:50.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 26 00:46:52.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780408, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:46:55.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:05.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3788" for this suite. STEP: Destroying namespace "webhook-3788-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.123 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":244,"skipped":3942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:05.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 26 00:47:10.529: INFO: Successfully updated pod "annotationupdate1345bbed-054c-44ff-badc-271e5a22d13f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:12.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6467" for this suite. 
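
Annotation updates reach the container in the spec above because the kubelet rewrites the projected downwardAPI file on its sync loop; no restart is involved. A sketch of the pod plus the patch that triggers the rewrite; annotation keys and values are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "alice"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Changing the annotation is enough; the kubelet refreshes the file.
	patch := []byte(`{"metadata":{"annotations":{"builder":"bob"}}}`)
	if _, err := cs.CoreV1().Pods("default").Patch(ctx, "annotationupdate-demo",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
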
• [SLOW TEST:6.627 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":3976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:12.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0326 00:47:22.729391 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 26 00:47:22.729: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:22.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3533" for this suite. 
• [SLOW TEST:10.179 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":246,"skipped":4009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:22.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:27.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8170" for this suite. 
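
Adoption in the ReplicationController spec above works because the RC's manager looks for orphans matching its selector before creating replacements. A sketch of the create-orphan-then-create-RC sequence with assumed names:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	labels := map[string]string{"name": "pod-adoption"}
	// 1. An orphan pod that merely carries the right label.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "busybox", Image: "busybox", Command: []string{"sleep", "3600"},
		}}},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// 2. An RC whose selector matches; its manager adopts the orphan instead
	//    of creating a new replica.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	time.Sleep(5 * time.Second)
	p, err := cs.CoreV1().Pods("default").Get(ctx, "pod-adoption", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range p.OwnerReferences {
		fmt.Printf("owned by %s/%s\n", ref.Kind, ref.Name) // expect ReplicationController/pod-adoption
	}
}
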
• [SLOW TEST:5.105 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":247,"skipped":4034,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:27.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:27.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-456" for this suite. 
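
The discovery walk in the CRD spec above (/apis, then the group document, then the group/version resource list) is exactly what the discovery client exposes. A sketch:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	disc := cs.Discovery()

	// /apis: find the apiextensions.k8s.io group and its preferred version.
	groups, err := disc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("preferred:", g.PreferredVersion.GroupVersion)
		}
	}
	// /apis/apiextensions.k8s.io/v1: confirm the CRD resource is listed.
	rl, err := disc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found:", r.Name, "kind:", r.Kind)
		}
	}
}

The same documents can be inspected by hand with `kubectl get --raw /apis` and `kubectl get --raw /apis/apiextensions.k8s.io/v1`.
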
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":248,"skipped":4126,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:27.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 26 00:47:36.077: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 26 00:47:36.083: INFO: Pod pod-with-prestop-http-hook still exists Mar 26 00:47:38.083: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 26 00:47:38.086: INFO: Pod pod-with-prestop-http-hook still exists Mar 26 00:47:40.083: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 26 00:47:40.087: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:40.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4786" for this suite. 
• [SLOW TEST:12.156 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4134,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:40.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:47:40.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9" in namespace "projected-653" to be "Succeeded or Failed" Mar 26 00:47:40.204: INFO: Pod "downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.649039ms Mar 26 00:47:42.208: INFO: Pod "downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013152188s Mar 26 00:47:44.213: INFO: Pod "downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017902378s STEP: Saw pod success Mar 26 00:47:44.213: INFO: Pod "downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9" satisfied condition "Succeeded or Failed" Mar 26 00:47:44.216: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9 container client-container: STEP: delete the pod Mar 26 00:47:44.235: INFO: Waiting for pod downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9 to disappear Mar 26 00:47:44.276: INFO: Pod downwardapi-volume-d2d3eb3c-b2e3-4186-bb16-01967b4743c9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:44.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-653" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4138,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:44.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:47:44.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84" in namespace "projected-9924" to be "Succeeded or Failed" Mar 26 00:47:44.342: INFO: Pod "downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84": Phase="Pending", Reason="", readiness=false. Elapsed: 3.004743ms Mar 26 00:47:46.346: INFO: Pod "downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007287947s Mar 26 00:47:48.350: INFO: Pod "downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011482282s STEP: Saw pod success Mar 26 00:47:48.350: INFO: Pod "downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84" satisfied condition "Succeeded or Failed" Mar 26 00:47:48.353: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84 container client-container: STEP: delete the pod Mar 26 00:47:48.390: INFO: Waiting for pod downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84 to disappear Mar 26 00:47:48.392: INFO: Pod downwardapi-volume-0bb8e88a-c8bd-4839-ab1a-a458760e2c84 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:48.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9924" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:48.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 26 00:47:48.446: INFO: Waiting up to 5m0s for pod "downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8" in namespace "downward-api-4717" to be "Succeeded or Failed" Mar 26 00:47:48.450: INFO: Pod "downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551172ms Mar 26 00:47:50.453: INFO: Pod "downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006908232s Mar 26 00:47:52.457: INFO: Pod "downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010349618s STEP: Saw pod success Mar 26 00:47:52.457: INFO: Pod "downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8" satisfied condition "Succeeded or Failed" Mar 26 00:47:52.460: INFO: Trying to get logs from node latest-worker pod downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8 container dapi-container: STEP: delete the pod Mar 26 00:47:52.494: INFO: Waiting for pod downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8 to disappear Mar 26 00:47:52.508: INFO: Pod downward-api-3301c48b-b376-440c-8b83-f5c13bf63ae8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:47:52.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4717" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4171,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:47:52.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 26 00:47:52.566: INFO: >>> kubeConfig: /root/.kube/config Mar 26 00:47:55.467: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:48:05.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8298" for this suite. • [SLOW TEST:13.413 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":253,"skipped":4178,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:48:05.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 26 00:48:06.045: INFO: Waiting up to 5m0s for pod "var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53" in namespace "var-expansion-1167" to be "Succeeded or Failed" Mar 26 00:48:06.049: INFO: Pod "var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.777647ms Mar 26 00:48:08.053: INFO: Pod "var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007533372s Mar 26 00:48:10.061: INFO: Pod "var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015463139s STEP: Saw pod success Mar 26 00:48:10.061: INFO: Pod "var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53" satisfied condition "Succeeded or Failed" Mar 26 00:48:10.064: INFO: Trying to get logs from node latest-worker2 pod var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53 container dapi-container: STEP: delete the pod Mar 26 00:48:10.140: INFO: Waiting for pod var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53 to disappear Mar 26 00:48:10.151: INFO: Pod var-expansion-b993ddaa-f416-4dc1-aa37-27f5118c5b53 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:48:10.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1167" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4184,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:48:10.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 26 00:48:10.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1524' Mar 26 00:48:10.546: INFO: stderr: "" Mar 26 00:48:10.546: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 26 00:48:10.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1524' Mar 26 00:48:10.678: INFO: stderr: "" Mar 26 00:48:10.678: INFO: stdout: "update-demo-nautilus-8lrhz update-demo-nautilus-bmf6r " Mar 26 00:48:10.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8lrhz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1524' Mar 26 00:48:10.771: INFO: stderr: "" Mar 26 00:48:10.771: INFO: stdout: "" Mar 26 00:48:10.771: INFO: update-demo-nautilus-8lrhz is created but not running Mar 26 00:48:15.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1524' Mar 26 00:48:15.881: INFO: stderr: "" Mar 26 00:48:15.881: INFO: stdout: "update-demo-nautilus-8lrhz update-demo-nautilus-bmf6r " Mar 26 00:48:15.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8lrhz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1524' Mar 26 00:48:15.984: INFO: stderr: "" Mar 26 00:48:15.984: INFO: stdout: "true" Mar 26 00:48:15.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8lrhz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1524' Mar 26 00:48:16.082: INFO: stderr: "" Mar 26 00:48:16.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:48:16.082: INFO: validating pod update-demo-nautilus-8lrhz Mar 26 00:48:16.086: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:48:16.086: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:48:16.086: INFO: update-demo-nautilus-8lrhz is verified up and running Mar 26 00:48:16.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmf6r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1524' Mar 26 00:48:16.181: INFO: stderr: "" Mar 26 00:48:16.181: INFO: stdout: "true" Mar 26 00:48:16.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmf6r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1524' Mar 26 00:48:16.278: INFO: stderr: "" Mar 26 00:48:16.278: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:48:16.278: INFO: validating pod update-demo-nautilus-bmf6r Mar 26 00:48:16.282: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:48:16.282: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:48:16.282: INFO: update-demo-nautilus-bmf6r is verified up and running STEP: using delete to clean up resources Mar 26 00:48:16.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1524' Mar 26 00:48:16.393: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:48:16.393: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 26 00:48:16.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1524' Mar 26 00:48:16.494: INFO: stderr: "No resources found in kubectl-1524 namespace.\n" Mar 26 00:48:16.494: INFO: stdout: "" Mar 26 00:48:16.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1524 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 26 00:48:16.585: INFO: stderr: "" Mar 26 00:48:16.585: INFO: stdout: "update-demo-nautilus-8lrhz\nupdate-demo-nautilus-bmf6r\n" Mar 26 00:48:17.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1524' Mar 26 00:48:17.185: INFO: stderr: "No resources found in kubectl-1524 namespace.\n" Mar 26 00:48:17.186: INFO: stdout: "" Mar 26 00:48:17.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1524 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 26 00:48:17.279: INFO: stderr: "" Mar 26 00:48:17.279: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:48:17.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1524" for this suite. 
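The manifest that "create -f -" receives on stdin is never echoed into the log. A replication controller consistent with everything the suite goes on to validate in this run, two replicas, selector name=update-demo, a container named update-demo running the nautilus image, would be:

kubectl create -f - --namespace=kubectl-1524 <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF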
• [SLOW TEST:7.128 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":255,"skipped":4196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:48:17.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 26 00:48:17.601: INFO: Waiting up to 5m0s for pod "pod-8d18aee8-1a31-4558-95d6-342fbe85b072" in namespace "emptydir-8007" to be "Succeeded or Failed" Mar 26 00:48:17.611: INFO: Pod "pod-8d18aee8-1a31-4558-95d6-342fbe85b072": Phase="Pending", Reason="", readiness=false. Elapsed: 9.985482ms Mar 26 00:48:19.615: INFO: Pod "pod-8d18aee8-1a31-4558-95d6-342fbe85b072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014221512s Mar 26 00:48:21.619: INFO: Pod "pod-8d18aee8-1a31-4558-95d6-342fbe85b072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018330868s STEP: Saw pod success Mar 26 00:48:21.620: INFO: Pod "pod-8d18aee8-1a31-4558-95d6-342fbe85b072" satisfied condition "Succeeded or Failed" Mar 26 00:48:21.622: INFO: Trying to get logs from node latest-worker pod pod-8d18aee8-1a31-4558-95d6-342fbe85b072 container test-container: STEP: delete the pod Mar 26 00:48:21.660: INFO: Waiting for pod pod-8d18aee8-1a31-4558-95d6-342fbe85b072 to disappear Mar 26 00:48:21.665: INFO: Pod pod-8d18aee8-1a31-4558-95d6-342fbe85b072 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:48:21.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8007" for this suite. 
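The emptyDir spec above boils down to: run as a non-root UID, write a file into a tmpfs-backed emptyDir, and confirm it carries mode 0666. A sketch; the UID, file name and image are illustrative, while the container name and the non-root/0666/tmpfs parameters come from the spec title:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF

The sibling specs later in the run, (root,0644,default) and (root,0666,default), vary only the user, the mode and the medium.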
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:48:21.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-9048 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9048 to expose endpoints map[] Mar 26 00:48:21.830: INFO: Get endpoints failed (3.036634ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 26 00:48:22.834: INFO: successfully validated that service multi-endpoint-test in namespace services-9048 exposes endpoints map[] (1.00677575s elapsed) STEP: Creating pod pod1 in namespace services-9048 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9048 to expose endpoints map[pod1:[100]] Mar 26 00:48:25.888: INFO: successfully validated that service multi-endpoint-test in namespace services-9048 exposes endpoints map[pod1:[100]] (3.046993166s elapsed) STEP: Creating pod pod2 in namespace services-9048 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9048 to expose endpoints map[pod1:[100] pod2:[101]] Mar 26 00:48:28.976: INFO: successfully validated that service multi-endpoint-test in namespace services-9048 exposes endpoints map[pod1:[100] pod2:[101]] (3.083359452s elapsed) STEP: Deleting pod pod1 in namespace services-9048 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9048 to expose endpoints map[pod2:[101]] Mar 26 00:48:29.011: INFO: successfully validated that service multi-endpoint-test in namespace services-9048 exposes endpoints map[pod2:[101]] (25.350204ms elapsed) STEP: Deleting pod pod2 in namespace services-9048 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9048 to expose endpoints map[] Mar 26 00:48:30.030: INFO: successfully validated that service multi-endpoint-test in namespace services-9048 exposes endpoints map[] (1.014196551s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:48:30.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9048" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.432 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":257,"skipped":4276,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:48:30.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 26 00:48:30.165: INFO: Waiting up to 5m0s for pod "pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4" in namespace "emptydir-1591" to be "Succeeded or Failed" Mar 26 00:48:30.169: INFO: Pod "pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21234ms Mar 26 00:48:32.173: INFO: Pod "pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008186263s Mar 26 00:48:34.177: INFO: Pod "pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012172844s STEP: Saw pod success Mar 26 00:48:34.177: INFO: Pod "pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4" satisfied condition "Succeeded or Failed" Mar 26 00:48:34.180: INFO: Trying to get logs from node latest-worker pod pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4 container test-container: STEP: delete the pod Mar 26 00:48:34.236: INFO: Waiting for pod pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4 to disappear Mar 26 00:48:34.252: INFO: Pod pod-cd1e18d2-d364-46f8-b40b-6f2b9c261dd4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:48:34.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1591" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:48:34.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 26 00:48:34.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8689' Mar 26 00:48:34.603: INFO: stderr: "" Mar 26 00:48:34.604: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 26 00:48:34.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:34.727: INFO: stderr: "" Mar 26 00:48:34.727: INFO: stdout: "update-demo-nautilus-h6hcp update-demo-nautilus-vxhwt " Mar 26 00:48:34.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6hcp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:34.826: INFO: stderr: "" Mar 26 00:48:34.826: INFO: stdout: "" Mar 26 00:48:34.826: INFO: update-demo-nautilus-h6hcp is created but not running Mar 26 00:48:39.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:39.936: INFO: stderr: "" Mar 26 00:48:39.936: INFO: stdout: "update-demo-nautilus-h6hcp update-demo-nautilus-vxhwt " Mar 26 00:48:39.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6hcp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:40.033: INFO: stderr: "" Mar 26 00:48:40.033: INFO: stdout: "true" Mar 26 00:48:40.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6hcp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:40.128: INFO: stderr: "" Mar 26 00:48:40.128: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:48:40.128: INFO: validating pod update-demo-nautilus-h6hcp Mar 26 00:48:40.132: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:48:40.132: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:48:40.132: INFO: update-demo-nautilus-h6hcp is verified up and running Mar 26 00:48:40.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxhwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:40.219: INFO: stderr: "" Mar 26 00:48:40.219: INFO: stdout: "true" Mar 26 00:48:40.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxhwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:40.298: INFO: stderr: "" Mar 26 00:48:40.298: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:48:40.298: INFO: validating pod update-demo-nautilus-vxhwt Mar 26 00:48:40.301: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:48:40.301: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:48:40.301: INFO: update-demo-nautilus-vxhwt is verified up and running STEP: scaling down the replication controller Mar 26 00:48:40.304: INFO: scanned /root for discovery docs: Mar 26 00:48:40.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8689' Mar 26 00:48:41.461: INFO: stderr: "" Mar 26 00:48:41.461: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 26 00:48:41.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:41.560: INFO: stderr: "" Mar 26 00:48:41.560: INFO: stdout: "update-demo-nautilus-h6hcp update-demo-nautilus-vxhwt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 26 00:48:46.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:46.658: INFO: stderr: "" Mar 26 00:48:46.658: INFO: stdout: "update-demo-nautilus-h6hcp update-demo-nautilus-vxhwt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 26 00:48:51.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:51.761: INFO: stderr: "" Mar 26 00:48:51.761: INFO: stdout: "update-demo-nautilus-h6hcp update-demo-nautilus-vxhwt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 26 00:48:56.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:56.855: INFO: stderr: "" Mar 26 00:48:56.855: INFO: stdout: "update-demo-nautilus-vxhwt " Mar 26 00:48:56.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxhwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:56.951: INFO: stderr: "" Mar 26 00:48:56.951: INFO: stdout: "true" Mar 26 00:48:56.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxhwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:57.053: INFO: stderr: "" Mar 26 00:48:57.053: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:48:57.053: INFO: validating pod update-demo-nautilus-vxhwt Mar 26 00:48:57.074: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:48:57.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:48:57.074: INFO: update-demo-nautilus-vxhwt is verified up and running STEP: scaling up the replication controller Mar 26 00:48:57.077: INFO: scanned /root for discovery docs: Mar 26 00:48:57.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8689' Mar 26 00:48:58.236: INFO: stderr: "" Mar 26 00:48:58.236: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 26 00:48:58.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:48:58.336: INFO: stderr: "" Mar 26 00:48:58.336: INFO: stdout: "update-demo-nautilus-f5cs9 update-demo-nautilus-vxhwt " Mar 26 00:48:58.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5cs9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:48:58.426: INFO: stderr: "" Mar 26 00:48:58.426: INFO: stdout: "" Mar 26 00:48:58.426: INFO: update-demo-nautilus-f5cs9 is created but not running Mar 26 00:49:03.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8689' Mar 26 00:49:03.518: INFO: stderr: "" Mar 26 00:49:03.518: INFO: stdout: "update-demo-nautilus-f5cs9 update-demo-nautilus-vxhwt " Mar 26 00:49:03.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5cs9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:49:03.610: INFO: stderr: "" Mar 26 00:49:03.610: INFO: stdout: "true" Mar 26 00:49:03.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5cs9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:49:03.703: INFO: stderr: "" Mar 26 00:49:03.704: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:49:03.704: INFO: validating pod update-demo-nautilus-f5cs9 Mar 26 00:49:03.708: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:49:03.708: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:49:03.708: INFO: update-demo-nautilus-f5cs9 is verified up and running Mar 26 00:49:03.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxhwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:49:03.810: INFO: stderr: "" Mar 26 00:49:03.810: INFO: stdout: "true" Mar 26 00:49:03.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxhwt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8689' Mar 26 00:49:03.908: INFO: stderr: "" Mar 26 00:49:03.908: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 26 00:49:03.908: INFO: validating pod update-demo-nautilus-vxhwt Mar 26 00:49:03.912: INFO: got data: { "image": "nautilus.jpg" } Mar 26 00:49:03.912: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 26 00:49:03.912: INFO: update-demo-nautilus-vxhwt is verified up and running STEP: using delete to clean up resources Mar 26 00:49:03.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8689' Mar 26 00:49:04.018: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 26 00:49:04.018: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 26 00:49:04.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8689' Mar 26 00:49:04.120: INFO: stderr: "No resources found in kubectl-8689 namespace.\n" Mar 26 00:49:04.120: INFO: stdout: "" Mar 26 00:49:04.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8689 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 26 00:49:04.210: INFO: stderr: "" Mar 26 00:49:04.210: INFO: stdout: "update-demo-nautilus-f5cs9\nupdate-demo-nautilus-vxhwt\n" Mar 26 00:49:04.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8689' Mar 26 00:49:04.807: INFO: stderr: "No resources found in kubectl-8689 namespace.\n" Mar 26 00:49:04.807: INFO: stdout: "" Mar 26 00:49:04.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8689 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 26 00:49:04.909: INFO: stderr: "" Mar 26 00:49:04.909: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:04.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8689" for this suite. 
• [SLOW TEST:30.656 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":259,"skipped":4318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:04.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-3edc59a4-28bb-415a-8d5b-e61cd10ca2b3 STEP: Creating a pod to test consume configMaps Mar 26 00:49:05.134: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43" in namespace "configmap-8304" to be "Succeeded or Failed" Mar 26 00:49:05.137: INFO: Pod "pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.851804ms Mar 26 00:49:07.158: INFO: Pod "pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024172525s Mar 26 00:49:09.161: INFO: Pod "pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027517126s STEP: Saw pod success Mar 26 00:49:09.161: INFO: Pod "pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43" satisfied condition "Succeeded or Failed" Mar 26 00:49:09.164: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43 container configmap-volume-test: STEP: delete the pod Mar 26 00:49:09.182: INFO: Waiting for pod pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43 to disappear Mar 26 00:49:09.187: INFO: Pod pod-configmaps-5fb2a806-ca59-4b3a-8832-1104ba529e43 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:09.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8304" for this suite. 
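The ConfigMap spec above maps a single key to a chosen path with an explicit per-item mode. A sketch of that shape; the key, path and mode are illustrative, the container name matches the log:

kubectl create configmap configmap-test-volume-map --from-literal=data-1=value-1 --namespace=configmap-8304
kubectl apply -f - --namespace=configmap-8304 <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/path/to/data-1 && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400
EOF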
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4378,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:09.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-eb6be702-36c9-42d9-8c6d-0df694bd9a0f STEP: Creating a pod to test consume configMaps Mar 26 00:49:09.322: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d" in namespace "projected-6545" to be "Succeeded or Failed" Mar 26 00:49:09.325: INFO: Pod "pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.999468ms Mar 26 00:49:11.329: INFO: Pod "pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006819932s Mar 26 00:49:13.333: INFO: Pod "pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010804306s STEP: Saw pod success Mar 26 00:49:13.333: INFO: Pod "pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d" satisfied condition "Succeeded or Failed" Mar 26 00:49:13.336: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d container projected-configmap-volume-test: STEP: delete the pod Mar 26 00:49:13.368: INFO: Waiting for pod pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d to disappear Mar 26 00:49:13.391: INFO: Pod pod-projected-configmaps-f8339de6-9d3a-40aa-a620-7aec8bcc4c1d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:13.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6545" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4397,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:13.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1236 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1236 STEP: creating replication controller externalsvc in namespace services-1236 I0326 00:49:13.638688 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1236, replica count: 2 I0326 00:49:16.689418 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 00:49:19.689705 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 26 00:49:19.735: INFO: Creating new exec pod Mar 26 00:49:23.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1236 execpodv9ltw -- /bin/sh -x -c nslookup nodeport-service' Mar 26 00:49:23.966: INFO: stderr: "I0326 00:49:23.877464 3382 log.go:172] (0xc00053a630) (0xc0008d4000) Create stream\nI0326 00:49:23.877523 3382 log.go:172] (0xc00053a630) (0xc0008d4000) Stream added, broadcasting: 1\nI0326 00:49:23.880420 3382 log.go:172] (0xc00053a630) Reply frame received for 1\nI0326 00:49:23.880471 3382 log.go:172] (0xc00053a630) (0xc00099c000) Create stream\nI0326 00:49:23.880485 3382 log.go:172] (0xc00053a630) (0xc00099c000) Stream added, broadcasting: 3\nI0326 00:49:23.881910 3382 log.go:172] (0xc00053a630) Reply frame received for 3\nI0326 00:49:23.881969 3382 log.go:172] (0xc00053a630) (0xc0008d4140) Create stream\nI0326 00:49:23.882000 3382 log.go:172] (0xc00053a630) (0xc0008d4140) Stream added, broadcasting: 5\nI0326 00:49:23.883077 3382 log.go:172] (0xc00053a630) Reply frame received for 5\nI0326 00:49:23.948931 3382 log.go:172] (0xc00053a630) Data frame received for 5\nI0326 00:49:23.948959 3382 log.go:172] (0xc0008d4140) (5) Data frame handling\nI0326 00:49:23.948979 3382 log.go:172] (0xc0008d4140) (5) Data frame sent\n+ nslookup nodeport-service\nI0326 00:49:23.958848 3382 log.go:172] (0xc00053a630) Data frame received for 3\nI0326 00:49:23.958874 3382 log.go:172] (0xc00099c000) (3) Data frame handling\nI0326 
00:49:23.958904 3382 log.go:172] (0xc00099c000) (3) Data frame sent\nI0326 00:49:23.959699 3382 log.go:172] (0xc00053a630) Data frame received for 3\nI0326 00:49:23.959740 3382 log.go:172] (0xc00099c000) (3) Data frame handling\nI0326 00:49:23.959766 3382 log.go:172] (0xc00099c000) (3) Data frame sent\nI0326 00:49:23.960130 3382 log.go:172] (0xc00053a630) Data frame received for 5\nI0326 00:49:23.960159 3382 log.go:172] (0xc0008d4140) (5) Data frame handling\nI0326 00:49:23.960184 3382 log.go:172] (0xc00053a630) Data frame received for 3\nI0326 00:49:23.960207 3382 log.go:172] (0xc00099c000) (3) Data frame handling\nI0326 00:49:23.962138 3382 log.go:172] (0xc00053a630) Data frame received for 1\nI0326 00:49:23.962162 3382 log.go:172] (0xc0008d4000) (1) Data frame handling\nI0326 00:49:23.962181 3382 log.go:172] (0xc0008d4000) (1) Data frame sent\nI0326 00:49:23.962205 3382 log.go:172] (0xc00053a630) (0xc0008d4000) Stream removed, broadcasting: 1\nI0326 00:49:23.962263 3382 log.go:172] (0xc00053a630) Go away received\nI0326 00:49:23.962642 3382 log.go:172] (0xc00053a630) (0xc0008d4000) Stream removed, broadcasting: 1\nI0326 00:49:23.962667 3382 log.go:172] (0xc00053a630) (0xc00099c000) Stream removed, broadcasting: 3\nI0326 00:49:23.962682 3382 log.go:172] (0xc00053a630) (0xc0008d4140) Stream removed, broadcasting: 5\n" Mar 26 00:49:23.966: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1236.svc.cluster.local\tcanonical name = externalsvc.services-1236.svc.cluster.local.\nName:\texternalsvc.services-1236.svc.cluster.local\nAddress: 10.96.46.81\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1236, will wait for the garbage collector to delete the pods Mar 26 00:49:24.032: INFO: Deleting ReplicationController externalsvc took: 11.945013ms Mar 26 00:49:24.332: INFO: Terminating ReplicationController externalsvc pods took: 300.277386ms Mar 26 00:49:33.070: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:33.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1236" for this suite. 
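The nslookup output above shows the converted service as a CNAME for externalsvc. Written out directly, the end state the spec drives the service into is just the following (the suite also clears the clusterIP and node ports when flipping the type):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-1236
spec:
  type: ExternalName
  externalName: externalsvc.services-1236.svc.cluster.local
EOF

In-cluster lookups of nodeport-service then resolve through the CNAME to externalsvc's ClusterIP, 10.96.46.81 in this run.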
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:19.698 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":262,"skipped":4416,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:33.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:49:33.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Mar 26 00:49:33.276: INFO: stderr: "" Mar 26 00:49:33.276: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:33.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1498" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":263,"skipped":4418,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:33.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 00:49:34.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 00:49:36.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780574, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780574, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780574, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720780574, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 00:49:39.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:49:39.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8048-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:40.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4276" for this suite. STEP: Destroying namespace "webhook-4276-markers" for this suite. 
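Registration of the mutating webhook happens through the AdmissionRegistration API, pointing at the e2e-test-webhook service deployed above. A hedged sketch of such a configuration: the service name, namespace, CRD group, and resource plural are taken from the log, while the API version, webhook path, rule details, and name are placeholders, not the test's actual values. The caBundle is omitted here; a real registration must carry the CA that signed the webhook's serving certificate.

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: e2e-test-webhook-crd-mutation   # illustrative name
    webhooks:
    - name: crd-mutation.webhook.example.com
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-4276
          path: /mutating-custom-resource   # assumed path
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-8048-crds"]
      sideEffects: None
      admissionReviewVersions: ["v1", "v1beta1"]
    EOF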
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.490 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":264,"skipped":4439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:40.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 26 00:49:40.916: INFO: Waiting up to 5m0s for pod "pod-195f5be4-5383-4255-8477-327bc29e12a8" in namespace "emptydir-2986" to be "Succeeded or Failed" Mar 26 00:49:40.918: INFO: Pod "pod-195f5be4-5383-4255-8477-327bc29e12a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752168ms Mar 26 00:49:42.922: INFO: Pod "pod-195f5be4-5383-4255-8477-327bc29e12a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00671652s Mar 26 00:49:44.927: INFO: Pod "pod-195f5be4-5383-4255-8477-327bc29e12a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010963189s STEP: Saw pod success Mar 26 00:49:44.927: INFO: Pod "pod-195f5be4-5383-4255-8477-327bc29e12a8" satisfied condition "Succeeded or Failed" Mar 26 00:49:44.930: INFO: Trying to get logs from node latest-worker pod pod-195f5be4-5383-4255-8477-327bc29e12a8 container test-container: STEP: delete the pod Mar 26 00:49:44.962: INFO: Waiting for pod pod-195f5be4-5383-4255-8477-327bc29e12a8 to disappear Mar 26 00:49:44.973: INFO: Pod pod-195f5be4-5383-4255-8477-327bc29e12a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:49:44.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2986" for this suite. 
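The (root,0666,default) case creates a file with mode 0666 on an emptyDir backed by the node's default medium and verifies the resulting permissions. A sketch of an equivalent pod: the mounttest image appears in this cluster's image list, but the flags and names below follow the upstream mounttest pattern by assumption, since the log does not print the pod spec.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
        # Assumed mounttest flags: report the fs type, create a file with
        # mode 0666, and print the file's resulting permissions.
        args: ["--fs_type=/test-volume",
               "--new_file_0666=/test-volume/test-file",
               "--file_perm=/test-volume/test-file"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                  # empty medium = node's default storage
    EOF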
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4463,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:49:44.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 26 00:49:45.577: INFO: Pod name wrapped-volume-race-3d6d4860-39e3-4882-9aa9-850161eb0b1e: Found 0 pods out of 5 Mar 26 00:49:50.585: INFO: Pod name wrapped-volume-race-3d6d4860-39e3-4882-9aa9-850161eb0b1e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3d6d4860-39e3-4882-9aa9-850161eb0b1e in namespace emptydir-wrapper-4171, will wait for the garbage collector to delete the pods Mar 26 00:50:02.671: INFO: Deleting ReplicationController wrapped-volume-race-3d6d4860-39e3-4882-9aa9-850161eb0b1e took: 7.650663ms Mar 26 00:50:02.972: INFO: Terminating ReplicationController wrapped-volume-race-3d6d4860-39e3-4882-9aa9-850161eb0b1e pods took: 300.366024ms STEP: Creating RC which spawns configmap-volume pods Mar 26 00:50:13.823: INFO: Pod name wrapped-volume-race-3cc73f29-249f-46e5-a14f-490c003c5353: Found 0 pods out of 5 Mar 26 00:50:18.832: INFO: Pod name wrapped-volume-race-3cc73f29-249f-46e5-a14f-490c003c5353: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3cc73f29-249f-46e5-a14f-490c003c5353 in namespace emptydir-wrapper-4171, will wait for the garbage collector to delete the pods Mar 26 00:50:31.020: INFO: Deleting ReplicationController wrapped-volume-race-3cc73f29-249f-46e5-a14f-490c003c5353 took: 7.947519ms Mar 26 00:50:31.420: INFO: Terminating ReplicationController wrapped-volume-race-3cc73f29-249f-46e5-a14f-490c003c5353 pods took: 400.317579ms STEP: Creating RC which spawns configmap-volume pods Mar 26 00:50:43.249: INFO: Pod name wrapped-volume-race-e17f365d-e19f-41ad-b821-b61721e9c566: Found 0 pods out of 5 Mar 26 00:50:48.258: INFO: Pod name wrapped-volume-race-e17f365d-e19f-41ad-b821-b61721e9c566: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e17f365d-e19f-41ad-b821-b61721e9c566 in namespace emptydir-wrapper-4171, will wait for the garbage collector to delete the pods Mar 26 00:51:02.347: INFO: Deleting ReplicationController wrapped-volume-race-e17f365d-e19f-41ad-b821-b61721e9c566 took: 6.227333ms Mar 26 00:51:02.747: INFO: Terminating ReplicationController wrapped-volume-race-e17f365d-e19f-41ad-b821-b61721e9c566 pods took: 400.267204ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:51:13.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4171" for this suite. • [SLOW TEST:88.734 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":266,"skipped":4483,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:51:13.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 26 00:51:13.820: INFO: Waiting up to 5m0s for pod "var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8" in namespace "var-expansion-5618" to be "Succeeded or Failed" Mar 26 00:51:13.823: INFO: Pod "var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349009ms Mar 26 00:51:15.827: INFO: Pod "var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006239992s Mar 26 00:51:17.831: INFO: Pod "var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010592821s STEP: Saw pod success Mar 26 00:51:17.831: INFO: Pod "var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8" satisfied condition "Succeeded or Failed" Mar 26 00:51:17.834: INFO: Trying to get logs from node latest-worker pod var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8 container dapi-container: STEP: delete the pod Mar 26 00:51:17.867: INFO: Waiting for pod var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8 to disappear Mar 26 00:51:17.871: INFO: Pod var-expansion-2b9fa441-62d4-4c93-922e-ee17f2ceb3a8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:51:17.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5618" for this suite. 
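Substitution in a container's args relies on the kubelet expanding $(VAR) references from the container's own environment before the command runs, so the shell never sees the unexpanded token. A minimal sketch; the log does not print the pod spec, so the names and values here are illustrative (the busybox image does appear in this cluster's image list):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c"]
        # $(TEST_VAR) is replaced by the kubelet, not by the shell:
        args: ["echo test-value is: $(TEST_VAR)"]
        env:
        - name: TEST_VAR
          value: "test-value"
    EOF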
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4501,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:51:17.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-120.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-120.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 00:51:24.044: INFO: DNS probes using dns-120/dns-test-9fdaf678-24ea-48f6-811e-a19559f4667d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:51:24.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-120" for this suite. 
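The wheezy/jessie probe loops above drive dig over UDP and TCP against the cluster DNS service and write OK markers per lookup. A one-off manual equivalent, using the jessie-dnsutils image that appears in this cluster's image list; the kubectl run flags are assumed to be supported by this kubectl generation:

    # Single ad-hoc probe of the cluster service record, then clean up the pod:
    kubectl run dns-probe --rm -i --restart=Never \
      --image=gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0 \
      -- dig +search +short kubernetes.default.svc.cluster.local A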
• [SLOW TEST:6.319 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":268,"skipped":4513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:51:24.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:51:24.680: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f6b7207d-b82c-4b42-9399-26cd7070013c", Controller:(*bool)(0xc00329472a), BlockOwnerDeletion:(*bool)(0xc00329472b)}} Mar 26 00:51:24.721: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"92a2f7ac-6428-4dcb-a450-5c9e9395adc8", Controller:(*bool)(0xc002de4542), BlockOwnerDeletion:(*bool)(0xc002de4543)}} Mar 26 00:51:24.751: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"04d7faf9-c151-4ed9-8b81-57dad179a89d", Controller:(*bool)(0xc002de46ea), BlockOwnerDeletion:(*bool)(0xc002de46eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:51:29.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3336" for this suite. 
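The three OwnerReferences printed above form a deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The test passes if the garbage collector neither deadlocks on the cycle nor blocks deletion behind it. The ownership chain can be read back directly; the pod and namespace names come from the log:

    # Walk the circular ownership the test sets up:
    kubectl get pod pod1 -n gc-3336 -o jsonpath='{.metadata.ownerReferences[*].name}'   # -> pod3
    kubectl get pod pod2 -n gc-3336 -o jsonpath='{.metadata.ownerReferences[*].name}'   # -> pod1
    kubectl get pod pod3 -n gc-3336 -o jsonpath='{.metadata.ownerReferences[*].name}'   # -> pod2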
• [SLOW TEST:5.599 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":269,"skipped":4573,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:51:29.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 00:51:29.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67" in namespace "downward-api-7118" to be "Succeeded or Failed" Mar 26 00:51:29.866: INFO: Pod "downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67": Phase="Pending", Reason="", readiness=false. Elapsed: 3.542904ms Mar 26 00:51:31.870: INFO: Pod "downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007651072s Mar 26 00:51:33.874: INFO: Pod "downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011696883s STEP: Saw pod success Mar 26 00:51:33.874: INFO: Pod "downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67" satisfied condition "Succeeded or Failed" Mar 26 00:51:33.877: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67 container client-container: STEP: delete the pod Mar 26 00:51:33.924: INFO: Waiting for pod downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67 to disappear Mar 26 00:51:33.932: INFO: Pod downwardapi-volume-782c75bc-e711-452a-a6e4-092bbbed2f67 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:51:33.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7118" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4587,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:51:33.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 00:51:45.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3178" for this suite. • [SLOW TEST:11.086 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":271,"skipped":4596,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 00:51:45.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2896 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2896 STEP: Creating statefulset with conflicting port in namespace statefulset-2896 STEP: Waiting until pod test-pod will start running in namespace statefulset-2896 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2896 Mar 26 00:51:49.150: INFO: Observed stateful pod in namespace: statefulset-2896, name: ss-0, uid: 8482017d-4f6d-4d76-89b6-be37e6160449, status phase: Pending. 
Waiting for statefulset controller to delete. Mar 26 00:51:52.726: INFO: Observed stateful pod in namespace: statefulset-2896, name: ss-0, uid: 8482017d-4f6d-4d76-89b6-be37e6160449, status phase: Failed. Waiting for statefulset controller to delete. Mar 26 00:51:52.731: INFO: Observed stateful pod in namespace: statefulset-2896, name: ss-0, uid: 8482017d-4f6d-4d76-89b6-be37e6160449, status phase: Failed. Waiting for statefulset controller to delete. Mar 26 00:51:52.740: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2896 STEP: Removing pod with conflicting port in namespace statefulset-2896 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2896 and will be in running state Mar 26 00:56:52.824: FAIL: Timed out after 300.000s. Expected <*errors.errorString | 0xc004e5e960>: { s: "pod ss-0 is not in running phase: Pending", } to be nil Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782 +0x11df k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e65400) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324 k8s.io/kubernetes/test/e2e.TestE2E(0xc001e65400) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc001e65400, 0x4ae7658) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 26 00:56:52.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-2896' Mar 26 00:56:55.466: INFO: stderr: "" Mar 26 00:56:55.466: INFO: stdout: "Name: ss-0\nNamespace: statefulset-2896\nPriority: 0\nNode: latest-worker/\nLabels: baz=blah\n controller-revision-hash=ss-84f8fd7c56\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nIPs: \nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Image: docker.io/library/httpd:2.4.38-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-jwhtn (ro)\nVolumes:\n default-token-jwhtn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-jwhtn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m3s kubelet, latest-worker Predicate PodFitsHostPorts failed\n" Mar 26 00:56:55.466: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-2896 Priority: 0 Node: latest-worker/ Labels: baz=blah controller-revision-hash=ss-84f8fd7c56 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: IPs: Controlled By: StatefulSet/ss Containers: webserver: Image: docker.io/library/httpd:2.4.38-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-jwhtn (ro) Volumes: default-token-jwhtn: Type: Secret (a volume populated by a Secret) SecretName: default-token-jwhtn Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s 
node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m3s kubelet, latest-worker Predicate PodFitsHostPorts failed Mar 26 00:56:55.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-2896 --tail=100' Mar 26 00:56:55.587: INFO: rc: 1 Mar 26 00:56:55.587: INFO: Last 100 log lines of ss-0: Mar 26 00:56:55.587: INFO: Deleting all statefulset in ns statefulset-2896 Mar 26 00:56:55.590: INFO: Scaling statefulset ss to 0 Mar 26 00:57:05.622: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 00:57:05.625: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 STEP: Collecting events from namespace "statefulset-2896". STEP: Found 14 events. Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-2896/ss is recreating failed Pod ss-0 Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:45 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:46 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:46 +0000 UTC - event for test-pod: {kubelet latest-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:47 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:47 +0000 UTC - event for test-pod: {kubelet latest-worker} Created: Created container webserver Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:47 +0000 UTC - event for test-pod: {kubelet latest-worker} Started: Started container webserver Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:52 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Mar 26 00:57:05.642: INFO: At 2020-03-26 00:51:52 +0000 UTC - event for test-pod: {kubelet latest-worker} Killing: Stopping container webserver Mar 26 00:57:05.644: INFO: POD NODE PHASE GRACE CONDITIONS Mar 26 00:57:05.644: INFO: Mar 26 00:57:05.647: INFO: Logging node info for node latest-control-plane Mar 26 00:57:05.649: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane 6f844e63-ec06-4ae6-b2e5-2db982693de5 2825103 0 2020-03-15 18:27:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-26 00:54:33 +0000 UTC,LastTransitionTime:2020-03-15 18:27:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-26 00:54:33 +0000 UTC,LastTransitionTime:2020-03-15 18:27:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-26 00:54:33 +0000 UTC,LastTransitionTime:2020-03-15 18:27:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-26 00:54:33 +0000 UTC,LastTransitionTime:2020-03-15 18:28:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.11,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fd1b5d260b433d8f617f455164eb5a,SystemUUID:611bedf3-8581-4e6e-a43b-01a437bb59ad,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 26 00:57:05.650: INFO: Logging kubelet events for node latest-control-plane Mar 26 00:57:05.652: INFO: 
Logging pods the kubelet thinks is on node latest-control-plane Mar 26 00:57:05.671: INFO: kube-apiserver-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container kube-apiserver ready: true, restart count 0 Mar 26 00:57:05.671: INFO: kube-controller-manager-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 26 00:57:05.671: INFO: kube-scheduler-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container kube-scheduler ready: true, restart count 1 Mar 26 00:57:05.671: INFO: coredns-6955765f44-lq4t7 started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container coredns ready: true, restart count 0 Mar 26 00:57:05.671: INFO: coredns-6955765f44-f7wtl started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container coredns ready: true, restart count 0 Mar 26 00:57:05.671: INFO: etcd-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container etcd ready: true, restart count 0 Mar 26 00:57:05.671: INFO: kube-proxy-jpqvf started at 2020-03-15 18:27:50 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container kube-proxy ready: true, restart count 0 Mar 26 00:57:05.671: INFO: kindnet-sx5s7 started at 2020-03-15 18:27:50 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:57:05.671: INFO: local-path-provisioner-7745554f7f-fmsmz started at 2020-03-15 18:28:06 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.671: INFO: Container local-path-provisioner ready: true, restart count 0 W0326 00:57:05.675193 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 26 00:57:05.763: INFO: Latency metrics for node latest-control-plane Mar 26 00:57:05.763: INFO: Logging node info for node latest-worker Mar 26 00:57:05.766: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 98bcda58-a897-4edf-8857-b99f8c93a9dc 2825173 0 2020-03-15 18:28:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-26 00:54:57 +0000 UTC,LastTransitionTime:2020-03-15 18:28:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-26 00:54:57 +0000 UTC,LastTransitionTime:2020-03-15 18:28:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-26 00:54:57 +0000 UTC,LastTransitionTime:2020-03-15 18:28:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-26 00:54:57 +0000 UTC,LastTransitionTime:2020-03-15 18:28:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.13,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ded315e8ce8e461b8f5fb393e0d16a78,SystemUUID:e785bdde-e4ba-4979-bd97-238cd0b6bc89,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 
docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9 docker.io/aquasec/kube-bench:latest],SizeBytes:8028777,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135 docker.io/library/busybox:latest],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 26 00:57:05.766: INFO: Logging kubelet events for node latest-worker Mar 26 00:57:05.769: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 26 00:57:05.774: INFO: kindnet-vnjgh started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.774: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:57:05.774: INFO: kube-proxy-s9v6p started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.774: INFO: Container kube-proxy ready: true, restart count 0 W0326 00:57:05.778413 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 26 00:57:05.826: INFO: Latency metrics for node latest-worker Mar 26 00:57:05.826: INFO: Logging node info for node latest-worker2 Mar 26 00:57:05.830: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 9565903b-7ffe-4e7a-aa51-04476604a6d3 2825491 0 2020-03-15 18:28:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-26 00:56:48 +0000 UTC,LastTransitionTime:2020-03-15 18:28:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-26 00:56:48 +0000 UTC,LastTransitionTime:2020-03-15 18:28:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-26 00:56:48 +0000 UTC,LastTransitionTime:2020-03-15 18:28:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-26 00:56:48 +0000 UTC,LastTransitionTime:2020-03-15 18:28:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.12,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ebeddb5d9794194b18fe17773f1735f,SystemUUID:bf79d085-e343-4740-b85c-023bec44e003,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 
docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:563f44851d413c7199a0a8a2a13df1e98bee48229e19f4917e6da68e5482df6e docker.io/aquasec/kube-hunter:latest],SizeBytes:123995068,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9 docker.io/aquasec/kube-bench:latest],SizeBytes:8028777,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135 docker.io/library/busybox:latest],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 26 00:57:05.831: INFO: Logging kubelet events for node latest-worker2 Mar 26 00:57:05.833: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 26 00:57:05.854: INFO: kindnet-zq6gp started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.854: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 00:57:05.854: INFO: kube-proxy-c5xlk started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Mar 26 00:57:05.854: INFO: Container kube-proxy ready: true, restart count 0 W0326 00:57:05.858522 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 26 00:57:05.901: INFO: Latency metrics for node latest-worker2 Mar 26 00:57:05.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2896" for this suite. • Failure [320.891 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 00:56:52.824: Timed out after 300.000s. 
• Failure [320.891 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

    Mar 26 00:56:52.824: Timed out after 300.000s.
    Expected
        <*errors.errorString | 0xc004e5e960>: {
            s: "pod ss-0 is not in running phase: Pending",
        }
    to be nil

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":271,"skipped":4614,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
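
Editor's note: the assertion at statefulset.go:782 is a wait for the recreated pod to reach the Running phase; it expired after the full 300s budget with ss-0 still Pending, which typically points at a scheduling or image-pull problem on the node. A minimal sketch of an equivalent phase poll (not the framework's actual helper; namespace and pod name come from this run's log, and the context-taking signatures assume client-go v0.18+):

    // wait_running.go: poll a pod until it reaches phase Running, the
    // kind of check that timed out in the failure above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Poll every 2s for up to 5m, the same 300s budget the spec exhausted.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := clientset.CoreV1().Pods("statefulset-2896").Get(context.TODO(), "ss-0", metav1.GetOptions{})
            if err != nil {
                return false, err // abort on lookup errors; a real helper might tolerate NotFound
            }
            // In the failure above this stayed Pending for the whole window.
            return pod.Status.Phase == corev1.PodRunning, nil
        })
        fmt.Println("wait result:", err)
    }
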
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 26 00:57:05.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 26 00:57:10.534: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b40f8784-c919-4636-a023-53671a32703c"
Mar 26 00:57:10.534: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b40f8784-c919-4636-a023-53671a32703c" in namespace "pods-6383" to be "terminated due to deadline exceeded"
Mar 26 00:57:10.542: INFO: Pod "pod-update-activedeadlineseconds-b40f8784-c919-4636-a023-53671a32703c": Phase="Running", Reason="", readiness=true. Elapsed: 7.712713ms
Mar 26 00:57:12.545: INFO: Pod "pod-update-activedeadlineseconds-b40f8784-c919-4636-a023-53671a32703c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011481786s
Mar 26 00:57:12.546: INFO: Pod "pod-update-activedeadlineseconds-b40f8784-c919-4636-a023-53671a32703c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 26 00:57:12.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6383" for this suite.
• [SLOW TEST:6.636 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4660,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
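
Editor's note: the passing spec above relies on activeDeadlineSeconds being one of the few mutable fields of a running pod's spec: shortening it makes the kubelet terminate the pod with reason DeadlineExceeded, which is exactly the Phase="Failed" transition logged at 00:57:12.545. A minimal sketch of the same update (hypothetical pod name and namespace; context-taking signatures assume client-go v0.18+):

    // active_deadline.go: shrink activeDeadlineSeconds on a running pod,
    // forcing the kubelet to kill it with reason DeadlineExceeded.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods := clientset.CoreV1().Pods("default")
        pod, err := pods.Get(context.TODO(), "my-pod", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Setting a small deadline on an already-running pod is allowed;
        // the pod fails shortly after the deadline elapses.
        deadline := int64(5)
        pod.Spec.ActiveDeadlineSeconds = &deadline
        if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("pod will terminate with reason DeadlineExceeded once the deadline passes")
    }
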
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 26 00:57:12.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5191
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Mar 26 00:57:12.667: INFO: Found 0 stateful pods, waiting for 3
Mar 26 00:57:22.671: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 26 00:57:22.671: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 26 00:57:22.671: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 26 00:57:22.698: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 26 00:57:32.732: INFO: Updating stateful set ss2
Mar 26 00:57:32.738: INFO: Waiting for Pod statefulset-5191/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 26 00:57:42.887: INFO: Found 2 stateful pods, waiting for 3
Mar 26 00:57:52.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 26 00:57:52.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 26 00:57:52.891: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 26 00:57:52.915: INFO: Updating stateful set ss2
Mar 26 00:57:52.968: INFO: Waiting for Pod statefulset-5191/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 26 00:58:02.993: INFO: Updating stateful set ss2
Mar 26 00:58:03.028: INFO: Waiting for StatefulSet statefulset-5191/ss2 to complete update
Mar 26 00:58:03.028: INFO: Waiting for Pod statefulset-5191/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 26 00:58:13.037: INFO: Deleting all statefulset in ns statefulset-5191
Mar 26 00:58:13.040: INFO: Scaling statefulset ss2 to 0
Mar 26 00:58:23.073: INFO: Waiting for statefulset status.replicas updated to 0
Mar 26 00:58:23.076: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 26 00:58:23.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5191" for this suite.
• [SLOW TEST:70.544 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":273,"skipped":4672,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
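
Editor's note: the canary and phased rolling updates above are driven by spec.updateStrategy.rollingUpdate.partition: pods with ordinals at or above the partition roll to the updated template while the rest stay on the old revision (the two revisions named in the log, ss2-84f9d6bf57 and ss2-65c7964b94); lowering the partition step by step produces the phased rollout. A minimal sketch of the canary step (hypothetical StatefulSet name and namespace; context-taking signatures assume client-go v0.18+):

    // canary_partition.go: update a StatefulSet's template while holding
    // a partition, so only the highest ordinal rolls first (the canary).
    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        sets := clientset.AppsV1().StatefulSets("default")
        ss, err := sets.Get(context.TODO(), "web", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // With partition=2 on 3 replicas, only ordinal 2 adopts the new
        // template; lowering the partition later phases the update through
        // ordinals 1 and 0.
        partition := int32(2)
        ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
            Type:          appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
        }
        ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
        if _, err := sets.Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("canary update requested: only ordinals >= 2 roll to the new image")
    }
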
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 26 00:58:23.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 26 00:58:27.682: INFO: Successfully updated pod "labelsupdate132045a2-11a3-432e-a39c-4c5bf02650e0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 26 00:58:29.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-496" for this suite.
• [SLOW TEST:6.643 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4713,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
Mar 26 00:58:29.744: INFO: Running AfterSuite actions on all nodes
Mar 26 00:58:29.744: INFO: Running AfterSuite actions on node 1
Mar 26 00:58:29.744: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":274,"skipped":4717,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782

Ran 275 of 4992 Specs in 4909.581 seconds
FAIL! -- 274 Passed | 1 Failed | 0 Pending | 4717 Skipped
--- FAIL: TestE2E (4909.66s)
FAIL
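
Editor's note (for reference): the last passing spec above exercises a downwardAPI volume, a file the kubelet rewrites when the pod's labels are modified, which is how the test observes the label update. A minimal sketch of that volume definition (standalone illustration built from the k8s.io/api types, not the test's own code):

    // downward_labels.go: construct a downwardAPI volume exposing pod
    // labels as a file; mounted in a container, the "labels" file is
    // refreshed by the kubelet after label edits.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "labels",
                        FieldRef: &corev1.ObjectFieldSelector{
                            FieldPath: "metadata.labels",
                        },
                    }},
                },
            },
        }
        // Mounted at e.g. /etc/podinfo, the "labels" file tracks label
        // changes made through the API, which is what the spec asserts.
        fmt.Printf("%+v\n", vol)
    }
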