I0626 21:09:12.964315 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0626 21:09:12.964558 6 e2e.go:109] Starting e2e run "85baae3e-d6b4-4bc8-aba9-6d8fb2bb58ab" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593205751 - Will randomize all specs
Will run 278 of 4842 specs

Jun 26 21:09:13.026: INFO: >>> kubeConfig: /root/.kube/config
Jun 26 21:09:13.030: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 26 21:09:13.048: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 26 21:09:13.077: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 26 21:09:13.078: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 26 21:09:13.078: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 26 21:09:13.106: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 26 21:09:13.106: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 26 21:09:13.106: INFO: e2e test version: v1.17.4
Jun 26 21:09:13.108: INFO: kube-apiserver version: v1.17.2
Jun 26 21:09:13.108: INFO: >>> kubeConfig: /root/.kube/config
Jun 26 21:09:13.115: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 26 21:09:13.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
Jun 26 21:09:13.175: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:09:13.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1138" for this suite.
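The Lease spec above only verifies that the coordination.k8s.io Lease API is served. A minimal sketch of poking the same API by hand, assuming kubectl points at the cluster under test (commands are illustrative, not part of the suite):

# Node heartbeats live in kube-node-lease as coordination.k8s.io/v1 Lease objects.
kubectl get leases.coordination.k8s.io --all-namespaces
# Or fetch the group/version discovery document directly from the API server.
kubectl get --raw /apis/coordination.k8s.io/v1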
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":1,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:13.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-23d95df0-2f27-4769-a4ea-25aa0bb3b0d1 STEP: Creating a pod to test consume secrets Jun 26 21:09:13.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a" in namespace "projected-1138" to be "success or failure" Jun 26 21:09:13.373: INFO: Pod "pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.629183ms Jun 26 21:09:15.377: INFO: Pod "pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008536867s Jun 26 21:09:17.530: INFO: Pod "pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16172666s STEP: Saw pod success Jun 26 21:09:17.530: INFO: Pod "pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a" satisfied condition "success or failure" Jun 26 21:09:17.533: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a container projected-secret-volume-test: STEP: delete the pod Jun 26 21:09:17.590: INFO: Waiting for pod pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a to disappear Jun 26 21:09:17.607: INFO: Pod pod-projected-secrets-3680ebaa-9545-4ab1-8f27-8227c05db60a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:09:17.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1138" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":47,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:17.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jun 26 21:09:17.686: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix291780784/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:09:17.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-295" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":3,"skipped":48,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:17.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 26 21:09:17.867: INFO: Waiting up to 5m0s for pod "pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1" in namespace "emptydir-3516" to be "success or failure" Jun 26 21:09:17.914: INFO: Pod "pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 46.789221ms Jun 26 21:09:19.918: INFO: Pod "pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051046797s Jun 26 21:09:21.938: INFO: Pod "pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.070791974s STEP: Saw pod success Jun 26 21:09:21.938: INFO: Pod "pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1" satisfied condition "success or failure" Jun 26 21:09:21.940: INFO: Trying to get logs from node jerma-worker pod pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1 container test-container: STEP: delete the pod Jun 26 21:09:21.958: INFO: Waiting for pod pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1 to disappear Jun 26 21:09:21.962: INFO: Pod pod-4dbf3924-be5d-4b07-bf20-6524efdbd6b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:09:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3516" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:21.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:09:22.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893" in namespace "projected-5220" to be "success or failure" Jun 26 21:09:22.159: INFO: Pod "downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893": Phase="Pending", Reason="", readiness=false. Elapsed: 20.004594ms Jun 26 21:09:24.163: INFO: Pod "downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023966698s Jun 26 21:09:26.167: INFO: Pod "downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028341059s STEP: Saw pod success Jun 26 21:09:26.167: INFO: Pod "downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893" satisfied condition "success or failure" Jun 26 21:09:26.170: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893 container client-container: STEP: delete the pod Jun 26 21:09:26.210: INFO: Waiting for pod downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893 to disappear Jun 26 21:09:26.226: INFO: Pod downwardapi-volume-53ef88c2-9218-443a-bfce-21b72391a893 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:09:26.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5220" for this suite. 
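The downwardAPI spec above checks that a container's CPU request can be exposed as a file through a downwardAPI volume. A minimal sketch of that wiring, with illustrative names; with divisor 1m the mounted file reads 250:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF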
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":85,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:26.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:09:26.318: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:09:27.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4543" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":6,"skipped":92,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:27.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 26 21:09:27.588: INFO: Waiting up to 5m0s for pod "pod-e2a970a0-7a57-49c5-bfac-94b95e070553" in namespace "emptydir-1103" to be "success or failure" Jun 26 21:09:27.591: INFO: Pod "pod-e2a970a0-7a57-49c5-bfac-94b95e070553": Phase="Pending", Reason="", readiness=false. Elapsed: 3.655836ms Jun 26 21:09:29.596: INFO: Pod "pod-e2a970a0-7a57-49c5-bfac-94b95e070553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008243876s Jun 26 21:09:31.599: INFO: Pod "pod-e2a970a0-7a57-49c5-bfac-94b95e070553": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011623819s STEP: Saw pod success Jun 26 21:09:31.599: INFO: Pod "pod-e2a970a0-7a57-49c5-bfac-94b95e070553" satisfied condition "success or failure" Jun 26 21:09:31.602: INFO: Trying to get logs from node jerma-worker2 pod pod-e2a970a0-7a57-49c5-bfac-94b95e070553 container test-container: STEP: delete the pod Jun 26 21:09:31.676: INFO: Waiting for pod pod-e2a970a0-7a57-49c5-bfac-94b95e070553 to disappear Jun 26 21:09:31.698: INFO: Pod pod-e2a970a0-7a57-49c5-bfac-94b95e070553 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:09:31.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1103" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:09:31.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9769 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9769 STEP: Creating statefulset with conflicting port in namespace statefulset-9769 STEP: Waiting until pod test-pod will start running in namespace statefulset-9769 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9769 Jun 26 21:09:37.882: INFO: Observed stateful pod in namespace: statefulset-9769, name: ss-0, uid: 0149f680-285a-4016-9099-6d7655e1b932, status phase: Pending. Waiting for statefulset controller to delete. Jun 26 21:09:38.422: INFO: Observed stateful pod in namespace: statefulset-9769, name: ss-0, uid: 0149f680-285a-4016-9099-6d7655e1b932, status phase: Failed. Waiting for statefulset controller to delete. Jun 26 21:09:38.445: INFO: Observed stateful pod in namespace: statefulset-9769, name: ss-0, uid: 0149f680-285a-4016-9099-6d7655e1b932, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 26 21:09:38.462: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9769
STEP: Removing pod with conflicting port in namespace statefulset-9769
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9769 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jun 26 21:09:42.554: INFO: Deleting all statefulset in ns statefulset-9769
Jun 26 21:09:42.557: INFO: Scaling statefulset ss to 0
Jun 26 21:09:52.619: INFO: Waiting for statefulset status.replicas updated to 0
Jun 26 21:09:52.622: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:09:52.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9769" for this suite.
• [SLOW TEST:20.932 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":8,"skipped":179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 26 21:09:52.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3591
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 26 21:09:52.686: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 26 21:10:16.824: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostname&protocol=http&host=10.244.1.16&port=8080&tries=1'] Namespace:pod-network-test-3591 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 26 21:10:16.824: INFO: >>> kubeConfig: /root/.kube/config
I0626 21:10:16.858640 6 log.go:172] (0xc0028acd10) (0xc001ef63c0) Create stream
I0626 21:10:16.858677 6 log.go:172] (0xc0028acd10) (0xc001ef63c0) Stream added, broadcasting: 1
I0626 21:10:16.861691 6 log.go:172] (0xc0028acd10) Reply frame received for 1
I0626 21:10:16.861734 6 log.go:172] (0xc0028acd10) (0xc001e2e0a0) Create stream
I0626 21:10:16.861748 6 log.go:172] (0xc0028acd10) (0xc001e2e0a0) Stream added, broadcasting: 3
I0626 21:10:16.862717 6 log.go:172] (0xc0028acd10) Reply frame received for 3
I0626 21:10:16.862771 6 log.go:172] (0xc0028acd10) (0xc001e2e1e0) Create stream
I0626 21:10:16.862795 6 log.go:172] (0xc0028acd10) (0xc001e2e1e0) Stream added, broadcasting: 5
I0626 21:10:16.863576 6 log.go:172] (0xc0028acd10) Reply frame received for 5
I0626 21:10:17.063833 6 log.go:172] (0xc0028acd10) Data frame received for 3
I0626 21:10:17.063872 6 log.go:172] (0xc001e2e0a0) (3) Data frame handling
I0626 21:10:17.063939 6 log.go:172] (0xc001e2e0a0) (3) Data frame sent
I0626 21:10:17.064667 6 log.go:172] (0xc0028acd10) Data frame received for 3
I0626 21:10:17.064684 6 log.go:172] (0xc001e2e0a0) (3) Data frame handling
I0626 21:10:17.064905 6 log.go:172] (0xc0028acd10) Data frame received for 5
I0626 21:10:17.064922 6 log.go:172] (0xc001e2e1e0) (5) Data frame handling
I0626 21:10:17.067381 6 log.go:172] (0xc0028acd10) Data frame received for 1
I0626 21:10:17.067419 6 log.go:172] (0xc001ef63c0) (1) Data frame handling
I0626 21:10:17.067440 6 log.go:172] (0xc001ef63c0) (1) Data frame sent
I0626 21:10:17.067453 6 log.go:172] (0xc0028acd10) (0xc001ef63c0) Stream removed, broadcasting: 1
I0626 21:10:17.067543 6 log.go:172] (0xc0028acd10) Go away received
I0626 21:10:17.067859 6 log.go:172] (0xc0028acd10) (0xc001ef63c0) Stream removed, broadcasting: 1
I0626 21:10:17.067873 6 log.go:172] (0xc0028acd10) (0xc001e2e0a0) Stream removed, broadcasting: 3
I0626 21:10:17.067878 6 log.go:172] (0xc0028acd10) (0xc001e2e1e0) Stream removed, broadcasting: 5
Jun 26 21:10:17.067: INFO: Waiting for responses: map[]
Jun 26 21:10:17.072: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostname&protocol=http&host=10.244.2.108&port=8080&tries=1'] Namespace:pod-network-test-3591 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 26 21:10:17.072: INFO: >>> kubeConfig: /root/.kube/config
I0626 21:10:17.104583 6 log.go:172] (0xc0031082c0) (0xc001e2e460) Create stream
I0626 21:10:17.104611 6 log.go:172] (0xc0031082c0) (0xc001e2e460) Stream added, broadcasting: 1
I0626 21:10:17.107420 6 log.go:172] (0xc0031082c0) Reply frame received for 1
I0626 21:10:17.107447 6 log.go:172] (0xc0031082c0) (0xc002257a40) Create stream
I0626 21:10:17.107456 6 log.go:172] (0xc0031082c0) (0xc002257a40) Stream added, broadcasting: 3
I0626 21:10:17.108325 6 log.go:172] (0xc0031082c0) Reply frame received for 3
I0626 21:10:17.108365 6 log.go:172] (0xc0031082c0) (0xc001ef6500) Create stream
I0626 21:10:17.108378 6 log.go:172] (0xc0031082c0) (0xc001ef6500) Stream added, broadcasting: 5
I0626 21:10:17.109323 6 log.go:172] (0xc0031082c0) Reply frame received for 5
I0626 21:10:17.179510 6 log.go:172] (0xc0031082c0) Data frame received for 3
I0626 21:10:17.179536 6 log.go:172] (0xc002257a40) (3) Data frame handling
I0626 21:10:17.179547 6 log.go:172] (0xc002257a40) (3) Data frame sent
I0626 21:10:17.180042 6 log.go:172] (0xc0031082c0) Data frame received for 5
I0626 21:10:17.180062 6 log.go:172] (0xc001ef6500) (5) Data frame handling
I0626 21:10:17.180227 6 log.go:172] (0xc0031082c0) Data frame received for 3
I0626 21:10:17.180245 6 log.go:172] (0xc002257a40) (3) Data frame handling
I0626 21:10:17.182014 6 log.go:172] (0xc0031082c0) Data frame received for 1
I0626 21:10:17.182061 6 log.go:172] (0xc001e2e460) (1) Data frame handling
I0626 21:10:17.182083 6 log.go:172] (0xc001e2e460) (1) Data frame sent
I0626 21:10:17.182093 6 log.go:172] (0xc0031082c0) (0xc001e2e460) Stream removed, broadcasting: 1
I0626 21:10:17.182115 6 log.go:172] (0xc0031082c0) Go away received
I0626 21:10:17.182223 6 log.go:172] (0xc0031082c0) (0xc001e2e460) Stream removed, broadcasting: 1
I0626 21:10:17.182248 6 log.go:172] (0xc0031082c0) (0xc002257a40) Stream removed, broadcasting: 3
I0626 21:10:17.182267 6 log.go:172] (0xc0031082c0) (0xc001ef6500) Stream removed, broadcasting: 5
Jun 26 21:10:17.182: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:10:17.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3591" for this suite.
• [SLOW TEST:24.577 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 26 21:10:17.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jun 26 21:10:21.379: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 26 21:10:26.491: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:10:26.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6419" for this suite.
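The grace-period spec above submits a pod, deletes it with a grace period, and then polls until the kubelet reports it gone. The same flow by hand, with an illustrative pod name:

kubectl run grace-demo --image=busybox --restart=Never -- sleep 3600
# A graceful delete sends SIGTERM first; SIGKILL only fires after the grace period expires.
kubectl delete pod grace-demo --grace-period=30
kubectl get pod grace-demo   # eventually returns NotFound once termination is observed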
• [SLOW TEST:9.284 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":10,"skipped":241,"failed":0}
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 26 21:10:26.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-a3823b94-1b77-4a9f-afe3-ba44aaf28a97
STEP: Creating a pod to test consume configMaps
Jun 26 21:10:26.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40" in namespace "configmap-7790" to be "success or failure"
Jun 26 21:10:26.681: INFO: Pod "pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40": Phase="Pending", Reason="", readiness=false. Elapsed: 30.851677ms
Jun 26 21:10:28.685: INFO: Pod "pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035005983s
Jun 26 21:10:30.690: INFO: Pod "pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039243589s
STEP: Saw pod success
Jun 26 21:10:30.690: INFO: Pod "pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40" satisfied condition "success or failure"
Jun 26 21:10:30.693: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40 container configmap-volume-test:
STEP: delete the pod
Jun 26 21:10:30.849: INFO: Waiting for pod pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40 to disappear
Jun 26 21:10:30.886: INFO: Pod pod-configmaps-16f5bd51-fcb6-4645-9497-30c16775fe40 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:10:30.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7790" for this suite.
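The ConfigMap spec above mirrors the earlier projected-secret case: a configMap key remapped to a new path with an explicit item mode, consumed as a volume. An illustrative equivalent (names are not the generated ones):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/path/to/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/config
  volumes:
  - name: cm-vol
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0644
EOF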
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":241,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:10:30.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:10:31.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:10:33.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728802631, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728802631, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728802631, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728802631, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:10:36.823: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:10:36.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1029" for this suite. STEP: Destroying namespace "webhook-1029-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.157 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":12,"skipped":249,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 26 21:10:37.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-s58rz in namespace proxy-9553
I0626 21:10:37.209491 6 runners.go:189] Created replication controller with name: proxy-service-s58rz, namespace: proxy-9553, replica count: 1
I0626 21:10:38.259925 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0626 21:10:39.260122 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0626 21:10:40.260373 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0626 21:10:41.260603 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:42.260874 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:43.261067 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:44.261335 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:45.261611 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:46.261890 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:47.262161 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:48.262428 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:49.262682 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0626 21:10:50.262927 6 runners.go:189] proxy-service-s58rz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 26 21:10:50.266: INFO: setup took 13.147408816s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 26 21:10:50.273: INFO: (0) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 6.999835ms)
Jun 26 21:10:50.273: INFO: (0) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 7.082225ms)
Jun 26 21:10:50.273: INFO: (0) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 7.097522ms)
Jun 26 21:10:50.275: INFO: (0) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 8.434595ms)
Jun 26 21:10:50.275: INFO: (0) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 8.947689ms)
Jun 26 21:10:50.275: INFO: (0) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 8.986344ms)
Jun 26 21:10:50.275: INFO: (0) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 9.102749ms)
Jun 26 21:10:50.276: INFO: (0) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 9.913021ms)
Jun 26 21:10:50.281: INFO: (0) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 14.656278ms)
Jun 26 21:10:50.281: INFO: (0) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 14.965552ms)
Jun 26 21:10:50.283: INFO: (0) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 16.455017ms)
Jun 26 21:10:50.291: INFO: (0) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 24.708914ms)
Jun 26 21:10:50.291: INFO: (0) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 24.680915ms)
Jun 26 21:10:50.291: INFO: (0) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test (200; 4.437384ms)
Jun 26 21:10:50.296: INFO: (1) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 4.375761ms)
Jun 26 21:10:50.296: INFO: (1) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.44435ms)
Jun 26 21:10:50.296: INFO: (1) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.494214ms)
Jun 26 21:10:50.296: INFO: (1) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 4.496145ms)
Jun 26 21:10:50.296: INFO: (1) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 4.467087ms)
Jun 26 21:10:50.296: INFO: (1) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 4.94112ms)
Jun 26 21:10:50.297: INFO: (1) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 5.490009ms)
Jun 26 21:10:50.297: INFO: (1) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 5.806469ms)
Jun 26 21:10:50.297: INFO: (1) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 5.833059ms)
Jun 26 21:10:50.297: INFO: (1) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 5.825414ms)
Jun 26 21:10:50.297: INFO: (1) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 5.832651ms)
Jun 26 21:10:50.300: INFO: (2) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 2.707041ms)
Jun 26 21:10:50.300: INFO: (2) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.154203ms)
Jun 26 21:10:50.300: INFO: (2) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.269724ms)
Jun 26 21:10:50.302: INFO: (2) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 4.263029ms)
Jun 26 21:10:50.302: INFO: (2) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.563168ms)
Jun 26 21:10:50.302: INFO: (2) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 4.556318ms)
Jun 26 21:10:50.302: INFO: (2) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 4.606948ms)
Jun 26 21:10:50.302: INFO: (2) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 4.916738ms)
Jun 26 21:10:50.302: INFO: (2) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 4.996196ms)
Jun 26 21:10:50.303: INFO: (2) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.231874ms)
Jun 26 21:10:50.303: INFO: (2) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 5.336777ms)
Jun 26 21:10:50.303: INFO: (2) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test (200; 5.333204ms)
Jun 26 21:10:50.304: INFO: (2) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 6.388974ms)
Jun 26 21:10:50.304: INFO: (2) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 6.324925ms)
Jun 26 21:10:50.304: INFO: (2) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 6.46363ms)
Jun 26 21:10:50.308: INFO: (3) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 4.169457ms)
Jun 26 21:10:50.308: INFO: (3) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 4.320242ms)
Jun 26 21:10:50.308: INFO: (3) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 4.568896ms)
Jun 26 21:10:50.308: INFO: (3) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 4.813206ms)
Jun 26 21:10:50.309: INFO: (3) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.996788ms)
Jun 26 21:10:50.309: INFO: (3) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 5.409711ms)
Jun 26 21:10:50.309: INFO: (3) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.601941ms)
Jun 26 21:10:50.309: INFO: (3) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 5.663547ms)
Jun 26 21:10:50.310: INFO: (3) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 6.127883ms)
Jun 26 21:10:50.310: INFO: (3) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 6.239187ms)
Jun 26 21:10:50.310: INFO: (3) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 6.224361ms)
Jun 26 21:10:50.310: INFO: (3) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 6.328785ms)
Jun 26 21:10:50.310: INFO: (3) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 6.537734ms)
Jun 26 21:10:50.318: INFO: (4) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 7.513279ms)
Jun 26 21:10:50.318: INFO: (4) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 7.791009ms)
Jun 26 21:10:50.318: INFO: (4) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 7.959016ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 11.12117ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 11.25118ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 11.287496ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 11.329429ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 11.315442ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 11.292101ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 11.417507ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 11.454179ms)
Jun 26 21:10:50.322: INFO: (4) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 11.668999ms)
Jun 26 21:10:50.323: INFO: (4) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 12.142023ms)
Jun 26 21:10:50.326: INFO: (5) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.536346ms)
Jun 26 21:10:50.326: INFO: (5) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.435815ms)
Jun 26 21:10:50.328: INFO: (5) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.702465ms)
Jun 26 21:10:50.328: INFO: (5) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.651919ms)
Jun 26 21:10:50.328: INFO: (5) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 5.682451ms)
Jun 26 21:10:50.328: INFO: (5) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 5.763126ms)
Jun 26 21:10:50.329: INFO: (5) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.85008ms)
Jun 26 21:10:50.329: INFO: (5) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 6.110142ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 7.075961ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 7.481864ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 7.50172ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 7.496881ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 7.6551ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 7.628045ms)
Jun 26 21:10:50.330: INFO: (5) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 7.585744ms)
Jun 26 21:10:50.334: INFO: (6) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.112177ms)
Jun 26 21:10:50.334: INFO: (6) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.272588ms)
Jun 26 21:10:50.334: INFO: (6) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test (200; 4.272422ms)
Jun 26 21:10:50.335: INFO: (6) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 4.475041ms)
Jun 26 21:10:50.335: INFO: (6) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 4.776289ms)
Jun 26 21:10:50.335: INFO: (6) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 4.916968ms)
Jun 26 21:10:50.335: INFO: (6) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 5.067101ms)
Jun 26 21:10:50.336: INFO: (6) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 5.318482ms)
Jun 26 21:10:50.336: INFO: (6) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.674607ms)
Jun 26 21:10:50.336: INFO: (6) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 5.866964ms)
Jun 26 21:10:50.336: INFO: (6) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 5.960129ms)
Jun 26 21:10:50.336: INFO: (6) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 6.092164ms)
Jun 26 21:10:50.336: INFO: (6) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 6.046374ms)
Jun 26 21:10:50.337: INFO: (6) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 6.518208ms)
Jun 26 21:10:50.337: INFO: (6) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 6.655248ms)
Jun 26 21:10:50.340: INFO: (7) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 2.768279ms)
Jun 26 21:10:50.340: INFO: (7) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 2.654479ms)
Jun 26 21:10:50.340: INFO: (7) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test (200; 2.900149ms)
Jun 26 21:10:50.340: INFO: (7) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 2.988444ms)
Jun 26 21:10:50.358: INFO: (7) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 20.869687ms)
Jun 26 21:10:50.358: INFO: (7) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 21.06643ms)
Jun 26 21:10:50.358: INFO: (7) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 21.01627ms)
Jun 26 21:10:50.359: INFO: (7) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 21.360836ms)
Jun 26 21:10:50.359: INFO: (7) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 21.654467ms)
Jun 26 21:10:50.359: INFO: (7) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 21.594773ms)
Jun 26 21:10:50.360: INFO: (7) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 22.694834ms)
Jun 26 21:10:50.360: INFO: (7) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 22.902009ms)
Jun 26 21:10:50.360: INFO: (7) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 23.11021ms)
Jun 26 21:10:50.360: INFO: (7) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 23.16611ms)
Jun 26 21:10:50.361: INFO: (7) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 23.287576ms)
Jun 26 21:10:50.365: INFO: (8) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 4.722419ms)
Jun 26 21:10:50.367: INFO: (8) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 6.591027ms)
Jun 26 21:10:50.367: INFO: (8) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 6.48437ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 7.018975ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: ... (200; 6.968905ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 7.056487ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 7.121747ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 7.062761ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 7.10761ms)
Jun 26 21:10:50.368: INFO: (8) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 7.657947ms)
Jun 26 21:10:50.369: INFO: (8) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 8.12399ms)
Jun 26 21:10:50.369: INFO: (8) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 8.542013ms)
Jun 26 21:10:50.369: INFO: (8) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 8.42595ms)
Jun 26 21:10:50.369: INFO: (8) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 8.549396ms)
Jun 26 21:10:50.369: INFO: (8) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 8.545972ms)
Jun 26 21:10:50.372: INFO: (9) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 2.158139ms)
Jun 26 21:10:50.372: INFO: (9) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 2.256161ms)
Jun 26 21:10:50.374: INFO: (9) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.195782ms)
Jun 26 21:10:50.374: INFO: (9) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 5.075773ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 5.507197ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 5.577419ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.620948ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 5.625768ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 5.637172ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 5.706378ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 5.654372ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.651741ms)
Jun 26 21:10:50.375: INFO: (9) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 5.718505ms)
Jun 26 21:10:50.376: INFO: (9) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 6.671399ms)
Jun 26 21:10:50.376: INFO: (9) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 6.703726ms)
Jun 26 21:10:50.376: INFO: (9) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: ... (200; 5.274785ms)
Jun 26 21:10:50.382: INFO: (10) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.443765ms)
Jun 26 21:10:50.382: INFO: (10) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.4437ms)
Jun 26 21:10:50.382: INFO: (10) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 5.45934ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 6.132178ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 6.257508ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 6.331214ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 6.360788ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 6.426158ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 6.556177ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 6.479563ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 6.641279ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 6.608073ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 6.700706ms)
Jun 26 21:10:50.383: INFO: (10) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 6.938159ms)
Jun 26 21:10:50.387: INFO: (11) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.371547ms)
Jun 26 21:10:50.387: INFO: (11) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.505705ms)
Jun 26 21:10:50.387: INFO: (11) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 3.595602ms)
Jun 26 21:10:50.387: INFO: (11) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 3.549183ms)
Jun 26 21:10:50.387: INFO: (11) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 3.796641ms)
Jun 26 21:10:50.387: INFO: (11) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 4.680761ms)
Jun 26 21:10:50.388: INFO: (11) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 4.672251ms)
Jun 26 21:10:50.388: INFO: (11) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 4.761533ms)
Jun 26 21:10:50.388: INFO: (11) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 4.776421ms)
Jun 26 21:10:50.388: INFO: (11) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 4.770694ms)
Jun 26 21:10:50.392: INFO: (12) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.562659ms)
Jun 26 21:10:50.392: INFO: (12) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.543909ms)
Jun 26 21:10:50.392: INFO: (12) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 4.976061ms)
Jun 26 21:10:50.393: INFO: (12) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 4.921751ms)
Jun 26 21:10:50.393: INFO: (12) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 4.942044ms)
Jun 26 21:10:50.393: INFO: (12) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 4.954331ms)
Jun 26 21:10:50.393: INFO: (12) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.105389ms)
Jun 26 21:10:50.394: INFO: (12) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 5.161007ms)
Jun 26 21:10:50.394: INFO: (12) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 5.327857ms)
Jun 26 21:10:50.394: INFO: (12) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 5.436875ms)
Jun 26 21:10:50.394: INFO: (12) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 5.496352ms)
Jun 26 21:10:50.397: INFO: (13) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.183957ms)
Jun 26 21:10:50.397: INFO: (13) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.136562ms)
Jun 26 21:10:50.398: INFO: (13) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 3.79513ms)
Jun 26 21:10:50.398: INFO: (13) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 3.911676ms)
Jun 26 21:10:50.398: INFO: (13) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 3.994331ms)
Jun 26 21:10:50.398: INFO: (13) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.990942ms)
Jun 26 21:10:50.398: INFO: (13) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.132856ms)
Jun 26 21:10:50.398: INFO: (13) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 4.255386ms)
Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<...
(200; 4.683257ms) Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 4.788449ms) Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 4.963812ms) Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 4.933661ms) Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 4.914293ms) Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 4.881322ms) Jun 26 21:10:50.399: INFO: (13) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 4.97585ms) Jun 26 21:10:50.400: INFO: (13) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 5.107764ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 5.151207ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.143298ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 5.192054ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 5.174618ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.169425ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 5.244514ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 5.166306ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.208487ms) Jun 26 21:10:50.405: INFO: (14) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 5.503423ms) Jun 26 21:10:50.406: INFO: (14) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 5.780178ms) Jun 26 21:10:50.406: INFO: (14) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 5.785753ms) Jun 26 21:10:50.408: INFO: (15) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 2.687799ms) Jun 26 21:10:50.408: INFO: (15) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 2.662157ms) Jun 26 21:10:50.409: INFO: (15) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... (200; 4.903938ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 9.450656ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 9.452725ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 9.505478ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 9.627478ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... 
(200; 9.536502ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 9.542804ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 9.557411ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 9.631103ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 9.633027ms) Jun 26 21:10:50.415: INFO: (15) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 9.596176ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 6.014297ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 6.192915ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 6.197006ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test (200; 6.214968ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 6.348571ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 6.287211ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 6.750283ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 6.743923ms) Jun 26 21:10:50.422: INFO: (16) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 6.739336ms) Jun 26 21:10:50.424: INFO: (16) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 8.026063ms) Jun 26 21:10:50.424: INFO: (16) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 8.078277ms) Jun 26 21:10:50.424: INFO: (16) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 7.985861ms) Jun 26 21:10:50.424: INFO: (16) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 8.111457ms) Jun 26 21:10:50.424: INFO: (16) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 8.151029ms) Jun 26 21:10:50.424: INFO: (16) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 8.027184ms) Jun 26 21:10:50.427: INFO: (17) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 3.136484ms) Jun 26 21:10:50.427: INFO: (17) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.46924ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... 
(200; 3.567415ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 3.591205ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 3.685357ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.631731ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.64369ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 3.649948ms) Jun 26 21:10:50.428: INFO: (17) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test (200; 4.682247ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 4.702467ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 4.868934ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 4.89434ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 4.944601ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:1080/proxy/: test<... (200; 4.989593ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 4.937203ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.00352ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 4.998065ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 5.051609ms) Jun 26 21:10:50.433: INFO: (18) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 5.176054ms) Jun 26 21:10:50.437: INFO: (19) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:460/proxy/: tls baz (200; 3.377234ms) Jun 26 21:10:50.437: INFO: (19) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:1080/proxy/: ... (200; 3.70024ms) Jun 26 21:10:50.437: INFO: (19) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r/proxy/: test (200; 3.64577ms) Jun 26 21:10:50.437: INFO: (19) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:162/proxy/: bar (200; 3.699865ms) Jun 26 21:10:50.437: INFO: (19) /api/v1/namespaces/proxy-9553/pods/proxy-service-s58rz-65j6r:160/proxy/: foo (200; 3.985435ms) Jun 26 21:10:50.437: INFO: (19) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:462/proxy/: tls qux (200; 3.927107ms) Jun 26 21:10:50.439: INFO: (19) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:162/proxy/: bar (200; 5.344119ms) Jun 26 21:10:50.439: INFO: (19) /api/v1/namespaces/proxy-9553/pods/https:proxy-service-s58rz-65j6r:443/proxy/: test<... 
(200; 5.326248ms) Jun 26 21:10:50.439: INFO: (19) /api/v1/namespaces/proxy-9553/pods/http:proxy-service-s58rz-65j6r:160/proxy/: foo (200; 5.826027ms) Jun 26 21:10:50.440: INFO: (19) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname2/proxy/: bar (200; 6.320452ms) Jun 26 21:10:50.440: INFO: (19) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname2/proxy/: bar (200; 6.62249ms) Jun 26 21:10:50.440: INFO: (19) /api/v1/namespaces/proxy-9553/services/proxy-service-s58rz:portname1/proxy/: foo (200; 6.553587ms) Jun 26 21:10:50.440: INFO: (19) /api/v1/namespaces/proxy-9553/services/http:proxy-service-s58rz:portname1/proxy/: foo (200; 6.600225ms) Jun 26 21:10:50.440: INFO: (19) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname2/proxy/: tls qux (200; 6.728062ms) Jun 26 21:10:50.440: INFO: (19) /api/v1/namespaces/proxy-9553/services/https:proxy-service-s58rz:tlsportname1/proxy/: tls baz (200; 6.729713ms) STEP: deleting ReplicationController proxy-service-s58rz in namespace proxy-9553, will wait for the garbage collector to delete the pods Jun 26 21:10:50.500: INFO: Deleting ReplicationController proxy-service-s58rz took: 7.328549ms Jun 26 21:10:50.800: INFO: Terminating ReplicationController proxy-service-s58rz pods took: 300.233523ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:10:53.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9553" for this suite. • [SLOW TEST:16.529 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":13,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:10:53.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-8596 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8596 to expose endpoints map[] Jun 26 21:10:53.825: INFO: Get endpoints failed (105.733747ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 26 21:10:54.828: INFO: successfully validated that service endpoint-test2 in namespace services-8596 exposes endpoints map[] (1.108947561s elapsed) STEP: Creating pod pod1 in 
namespace services-8596 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8596 to expose endpoints map[pod1:[80]] Jun 26 21:10:58.905: INFO: successfully validated that service endpoint-test2 in namespace services-8596 exposes endpoints map[pod1:[80]] (4.069422343s elapsed) STEP: Creating pod pod2 in namespace services-8596 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8596 to expose endpoints map[pod1:[80] pod2:[80]] Jun 26 21:11:03.110: INFO: successfully validated that service endpoint-test2 in namespace services-8596 exposes endpoints map[pod1:[80] pod2:[80]] (4.198477099s elapsed) STEP: Deleting pod pod1 in namespace services-8596 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8596 to expose endpoints map[pod2:[80]] Jun 26 21:11:04.171: INFO: successfully validated that service endpoint-test2 in namespace services-8596 exposes endpoints map[pod2:[80]] (1.057329098s elapsed) STEP: Deleting pod pod2 in namespace services-8596 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8596 to expose endpoints map[] Jun 26 21:11:05.236: INFO: successfully validated that service endpoint-test2 in namespace services-8596 exposes endpoints map[] (1.060436601s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:11:05.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8596" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.814 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":14,"skipped":341,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:11:05.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8763 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jun 26 21:11:05.614: INFO: Found 0 stateful pods, waiting for 3 Jun 26 21:11:15.620: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true Jun 26 21:11:15.620: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 21:11:15.620: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 26 21:11:15.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8763 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 21:11:18.497: INFO: stderr: "I0626 21:11:18.384027 72 log.go:172] (0xc000118f20) (0xc00023fae0) Create stream\nI0626 21:11:18.384082 72 log.go:172] (0xc000118f20) (0xc00023fae0) Stream added, broadcasting: 1\nI0626 21:11:18.386954 72 log.go:172] (0xc000118f20) Reply frame received for 1\nI0626 21:11:18.387007 72 log.go:172] (0xc000118f20) (0xc000706000) Create stream\nI0626 21:11:18.387025 72 log.go:172] (0xc000118f20) (0xc000706000) Stream added, broadcasting: 3\nI0626 21:11:18.388257 72 log.go:172] (0xc000118f20) Reply frame received for 3\nI0626 21:11:18.388312 72 log.go:172] (0xc000118f20) (0xc000710000) Create stream\nI0626 21:11:18.388327 72 log.go:172] (0xc000118f20) (0xc000710000) Stream added, broadcasting: 5\nI0626 21:11:18.389703 72 log.go:172] (0xc000118f20) Reply frame received for 5\nI0626 21:11:18.455079 72 log.go:172] (0xc000118f20) Data frame received for 5\nI0626 21:11:18.455124 72 log.go:172] (0xc000710000) (5) Data frame handling\nI0626 21:11:18.455151 72 log.go:172] (0xc000710000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 21:11:18.483320 72 log.go:172] (0xc000118f20) Data frame received for 3\nI0626 21:11:18.483351 72 log.go:172] (0xc000706000) (3) Data frame handling\nI0626 21:11:18.483367 72 log.go:172] (0xc000706000) (3) Data frame sent\nI0626 21:11:18.483762 72 log.go:172] (0xc000118f20) Data frame received for 5\nI0626 21:11:18.483774 72 log.go:172] (0xc000710000) (5) Data frame handling\nI0626 21:11:18.483790 72 log.go:172] (0xc000118f20) Data frame received for 3\nI0626 21:11:18.483796 72 log.go:172] (0xc000706000) (3) Data frame handling\nI0626 21:11:18.487119 72 log.go:172] (0xc000118f20) Data frame received for 1\nI0626 21:11:18.487152 72 log.go:172] (0xc00023fae0) (1) Data frame handling\nI0626 21:11:18.487176 72 log.go:172] (0xc00023fae0) (1) Data frame sent\nI0626 21:11:18.487207 72 log.go:172] (0xc000118f20) (0xc00023fae0) Stream removed, broadcasting: 1\nI0626 21:11:18.487234 72 log.go:172] (0xc000118f20) Go away received\nI0626 21:11:18.487855 72 log.go:172] (0xc000118f20) (0xc00023fae0) Stream removed, broadcasting: 1\nI0626 21:11:18.487881 72 log.go:172] (0xc000118f20) (0xc000706000) Stream removed, broadcasting: 3\nI0626 21:11:18.487894 72 log.go:172] (0xc000118f20) (0xc000710000) Stream removed, broadcasting: 5\n" Jun 26 21:11:18.497: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 21:11:18.497: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 26 21:11:28.530: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 26 21:11:38.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8763 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 
21:11:38.782: INFO: stderr: "I0626 21:11:38.695424 105 log.go:172] (0xc00099c630) (0xc000976000) Create stream\nI0626 21:11:38.695484 105 log.go:172] (0xc00099c630) (0xc000976000) Stream added, broadcasting: 1\nI0626 21:11:38.698266 105 log.go:172] (0xc00099c630) Reply frame received for 1\nI0626 21:11:38.698313 105 log.go:172] (0xc00099c630) (0xc0009de000) Create stream\nI0626 21:11:38.698325 105 log.go:172] (0xc00099c630) (0xc0009de000) Stream added, broadcasting: 3\nI0626 21:11:38.699142 105 log.go:172] (0xc00099c630) Reply frame received for 3\nI0626 21:11:38.699178 105 log.go:172] (0xc00099c630) (0xc0009760a0) Create stream\nI0626 21:11:38.699190 105 log.go:172] (0xc00099c630) (0xc0009760a0) Stream added, broadcasting: 5\nI0626 21:11:38.700055 105 log.go:172] (0xc00099c630) Reply frame received for 5\nI0626 21:11:38.774233 105 log.go:172] (0xc00099c630) Data frame received for 3\nI0626 21:11:38.774279 105 log.go:172] (0xc0009de000) (3) Data frame handling\nI0626 21:11:38.774294 105 log.go:172] (0xc0009de000) (3) Data frame sent\nI0626 21:11:38.774304 105 log.go:172] (0xc00099c630) Data frame received for 3\nI0626 21:11:38.774312 105 log.go:172] (0xc0009de000) (3) Data frame handling\nI0626 21:11:38.774343 105 log.go:172] (0xc00099c630) Data frame received for 5\nI0626 21:11:38.774355 105 log.go:172] (0xc0009760a0) (5) Data frame handling\nI0626 21:11:38.774377 105 log.go:172] (0xc0009760a0) (5) Data frame sent\nI0626 21:11:38.774388 105 log.go:172] (0xc00099c630) Data frame received for 5\nI0626 21:11:38.774401 105 log.go:172] (0xc0009760a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 21:11:38.775705 105 log.go:172] (0xc00099c630) Data frame received for 1\nI0626 21:11:38.775729 105 log.go:172] (0xc000976000) (1) Data frame handling\nI0626 21:11:38.775744 105 log.go:172] (0xc000976000) (1) Data frame sent\nI0626 21:11:38.775757 105 log.go:172] (0xc00099c630) (0xc000976000) Stream removed, broadcasting: 1\nI0626 21:11:38.775771 105 log.go:172] (0xc00099c630) Go away received\nI0626 21:11:38.776104 105 log.go:172] (0xc00099c630) (0xc000976000) Stream removed, broadcasting: 1\nI0626 21:11:38.776121 105 log.go:172] (0xc00099c630) (0xc0009de000) Stream removed, broadcasting: 3\nI0626 21:11:38.776131 105 log.go:172] (0xc00099c630) (0xc0009760a0) Stream removed, broadcasting: 5\n" Jun 26 21:11:38.782: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 21:11:38.782: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 21:11:58.804: INFO: Waiting for StatefulSet statefulset-8763/ss2 to complete update Jun 26 21:11:58.804: INFO: Waiting for Pod statefulset-8763/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jun 26 21:12:08.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8763 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 21:12:09.106: INFO: stderr: "I0626 21:12:08.964895 128 log.go:172] (0xc0006f8a50) (0xc0006b41e0) Create stream\nI0626 21:12:08.964965 128 log.go:172] (0xc0006f8a50) (0xc0006b41e0) Stream added, broadcasting: 1\nI0626 21:12:08.968043 128 log.go:172] (0xc0006f8a50) Reply frame received for 1\nI0626 21:12:08.968087 128 log.go:172] (0xc0006f8a50) (0xc000578000) Create stream\nI0626 21:12:08.968113 128 log.go:172] (0xc0006f8a50) (0xc000578000) Stream added, 
broadcasting: 3\nI0626 21:12:08.968966 128 log.go:172] (0xc0006f8a50) Reply frame received for 3\nI0626 21:12:08.968998 128 log.go:172] (0xc0006f8a50) (0xc00078e960) Create stream\nI0626 21:12:08.969008 128 log.go:172] (0xc0006f8a50) (0xc00078e960) Stream added, broadcasting: 5\nI0626 21:12:08.969860 128 log.go:172] (0xc0006f8a50) Reply frame received for 5\nI0626 21:12:09.049548 128 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0626 21:12:09.049584 128 log.go:172] (0xc00078e960) (5) Data frame handling\nI0626 21:12:09.049715 128 log.go:172] (0xc00078e960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 21:12:09.096375 128 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0626 21:12:09.096416 128 log.go:172] (0xc000578000) (3) Data frame handling\nI0626 21:12:09.096486 128 log.go:172] (0xc000578000) (3) Data frame sent\nI0626 21:12:09.096745 128 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0626 21:12:09.096774 128 log.go:172] (0xc00078e960) (5) Data frame handling\nI0626 21:12:09.096803 128 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0626 21:12:09.096819 128 log.go:172] (0xc000578000) (3) Data frame handling\nI0626 21:12:09.098648 128 log.go:172] (0xc0006f8a50) Data frame received for 1\nI0626 21:12:09.098685 128 log.go:172] (0xc0006b41e0) (1) Data frame handling\nI0626 21:12:09.098705 128 log.go:172] (0xc0006b41e0) (1) Data frame sent\nI0626 21:12:09.098728 128 log.go:172] (0xc0006f8a50) (0xc0006b41e0) Stream removed, broadcasting: 1\nI0626 21:12:09.098905 128 log.go:172] (0xc0006f8a50) Go away received\nI0626 21:12:09.099310 128 log.go:172] (0xc0006f8a50) (0xc0006b41e0) Stream removed, broadcasting: 1\nI0626 21:12:09.099334 128 log.go:172] (0xc0006f8a50) (0xc000578000) Stream removed, broadcasting: 3\nI0626 21:12:09.099347 128 log.go:172] (0xc0006f8a50) (0xc00078e960) Stream removed, broadcasting: 5\n" Jun 26 21:12:09.106: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 21:12:09.106: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 21:12:19.138: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 26 21:12:29.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8763 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 21:12:29.469: INFO: stderr: "I0626 21:12:29.363057 155 log.go:172] (0xc000994b00) (0xc0007034a0) Create stream\nI0626 21:12:29.363100 155 log.go:172] (0xc000994b00) (0xc0007034a0) Stream added, broadcasting: 1\nI0626 21:12:29.365801 155 log.go:172] (0xc000994b00) Reply frame received for 1\nI0626 21:12:29.365848 155 log.go:172] (0xc000994b00) (0xc00071a3c0) Create stream\nI0626 21:12:29.365862 155 log.go:172] (0xc000994b00) (0xc00071a3c0) Stream added, broadcasting: 3\nI0626 21:12:29.366842 155 log.go:172] (0xc000994b00) Reply frame received for 3\nI0626 21:12:29.366893 155 log.go:172] (0xc000994b00) (0xc000729040) Create stream\nI0626 21:12:29.366910 155 log.go:172] (0xc000994b00) (0xc000729040) Stream added, broadcasting: 5\nI0626 21:12:29.367749 155 log.go:172] (0xc000994b00) Reply frame received for 5\nI0626 21:12:29.460840 155 log.go:172] (0xc000994b00) Data frame received for 5\nI0626 21:12:29.460897 155 log.go:172] (0xc000729040) (5) Data frame handling\nI0626 21:12:29.460917 155 log.go:172] (0xc000729040) (5) Data frame sent\n+ mv -v 
/tmp/index.html /usr/local/apache2/htdocs/\nI0626 21:12:29.460954 155 log.go:172] (0xc000994b00) Data frame received for 3\nI0626 21:12:29.460977 155 log.go:172] (0xc00071a3c0) (3) Data frame handling\nI0626 21:12:29.460989 155 log.go:172] (0xc00071a3c0) (3) Data frame sent\nI0626 21:12:29.461023 155 log.go:172] (0xc000994b00) Data frame received for 5\nI0626 21:12:29.461035 155 log.go:172] (0xc000729040) (5) Data frame handling\nI0626 21:12:29.461070 155 log.go:172] (0xc000994b00) Data frame received for 3\nI0626 21:12:29.461105 155 log.go:172] (0xc00071a3c0) (3) Data frame handling\nI0626 21:12:29.462881 155 log.go:172] (0xc000994b00) Data frame received for 1\nI0626 21:12:29.462908 155 log.go:172] (0xc0007034a0) (1) Data frame handling\nI0626 21:12:29.462925 155 log.go:172] (0xc0007034a0) (1) Data frame sent\nI0626 21:12:29.463067 155 log.go:172] (0xc000994b00) (0xc0007034a0) Stream removed, broadcasting: 1\nI0626 21:12:29.463268 155 log.go:172] (0xc000994b00) Go away received\nI0626 21:12:29.463472 155 log.go:172] (0xc000994b00) (0xc0007034a0) Stream removed, broadcasting: 1\nI0626 21:12:29.463493 155 log.go:172] (0xc000994b00) (0xc00071a3c0) Stream removed, broadcasting: 3\nI0626 21:12:29.463514 155 log.go:172] (0xc000994b00) (0xc000729040) Stream removed, broadcasting: 5\n" Jun 26 21:12:29.469: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 21:12:29.469: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 21:12:39.488: INFO: Waiting for StatefulSet statefulset-8763/ss2 to complete update Jun 26 21:12:39.488: INFO: Waiting for Pod statefulset-8763/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 26 21:12:39.488: INFO: Waiting for Pod statefulset-8763/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 26 21:12:59.498: INFO: Waiting for StatefulSet statefulset-8763/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 26 21:13:09.494: INFO: Deleting all statefulset in ns statefulset-8763 Jun 26 21:13:09.496: INFO: Scaling statefulset ss2 to 0 Jun 26 21:13:29.524: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 21:13:29.527: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:13:29.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8763" for this suite. 
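(An aside for readers tracing the mechanics before the timing summary below: the roll-out and rollback above boil down to updating the StatefulSet's pod template and letting the controller replace pods in reverse ordinal order, recording each template as a revision. A minimal client-go sketch follows; it assumes client-go v0.18+, where calls take a context (the v1.17-era client used the same methods without it), and the names and kubeconfig path mirror the log purely for illustration.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Update the pod template image; the StatefulSet controller then rolls
	// pods in reverse ordinal order, as the log above shows.
	ss, err := cs.AppsV1().StatefulSets("statefulset-8763").Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := cs.AppsV1().StatefulSets("statefulset-8763").Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// The controller records the change as a new revision; comparing these two
	// status fields tells you when a roll-out (or rollback) has converged --
	// the same check behind "Waiting for Pod ... to have revision ..." above.
	ss, err = cs.AppsV1().StatefulSets("statefulset-8763").Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current:", ss.Status.CurrentRevision, "update:", ss.Status.UpdateRevision)
}
```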
• [SLOW TEST:144.194 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":15,"skipped":363,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:13:29.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:13:29.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30" in namespace "projected-4345" to be "success or failure" Jun 26 21:13:29.699: INFO: Pod "downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30": Phase="Pending", Reason="", readiness=false. Elapsed: 22.690343ms Jun 26 21:13:31.704: INFO: Pod "downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026980766s Jun 26 21:13:33.707: INFO: Pod "downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030879318s STEP: Saw pod success Jun 26 21:13:33.708: INFO: Pod "downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30" satisfied condition "success or failure" Jun 26 21:13:33.710: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30 container client-container: STEP: delete the pod Jun 26 21:13:33.793: INFO: Waiting for pod downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30 to disappear Jun 26 21:13:33.806: INFO: Pod downwardapi-volume-99368b9f-27f3-4838-93ba-bd9d9d817b30 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:13:33.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4345" for this suite. 
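(For context on the downward API test that just finished: the pod mounts a projected downwardAPI volume whose file is backed by the container's limits.memory; since the container sets no limit, the kubelet writes the node's allocatable memory instead. A sketch of that volume using k8s.io/api types; the volume, file, and container names are illustrative, not the suite's exact values.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The file "memory_limit" is rendered from the container's memory limit;
	// with no limit set, node allocatable memory appears in the file instead.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("built projected downwardAPI volume %q\n", vol.Name)
}
```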
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":375,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:13:33.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 26 21:13:33.875: INFO: Waiting up to 5m0s for pod "pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f" in namespace "emptydir-8378" to be "success or failure" Jun 26 21:13:33.879: INFO: Pod "pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832225ms Jun 26 21:13:35.960: INFO: Pod "pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084852965s Jun 26 21:13:37.964: INFO: Pod "pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08908524s STEP: Saw pod success Jun 26 21:13:37.964: INFO: Pod "pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f" satisfied condition "success or failure" Jun 26 21:13:37.967: INFO: Trying to get logs from node jerma-worker2 pod pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f container test-container: STEP: delete the pod Jun 26 21:13:38.044: INFO: Waiting for pod pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f to disappear Jun 26 21:13:38.061: INFO: Pod pod-7dd8a0fe-baad-429c-898d-a8e9b0431f3f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:13:38.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8378" for this suite. 
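(The emptyDir case above follows the same create-observe-delete shape: a non-root container mounts an emptyDir on the node's default medium, writes a file with mode 0644, and the test reads the result from the container's logs. A sketch of that kind of pod; the image, command, and UID are assumptions for illustration, not the suite's exact values.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // run as non-root, per the (non-root,0644,default) variant
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && ls -l /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "ed", MountPath: "/ed"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "ed",
				// An empty Medium field selects the node's default storage medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println("pod spec built:", pod.Name)
}
```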
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":382,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:13:38.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-619ec832-08a5-49f6-bc86-92bc86c56d00 in namespace container-probe-2304 Jun 26 21:13:42.393: INFO: Started pod test-webserver-619ec832-08a5-49f6-bc86-92bc86c56d00 in namespace container-probe-2304 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 21:13:42.396: INFO: Initial restart count of pod test-webserver-619ec832-08a5-49f6-bc86-92bc86c56d00 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:17:43.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2304" for this suite. 
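(What makes the probe test above pass is simply that the kubelet keeps getting successful responses for the four-minute observation window, so restartCount stays 0. The probe itself looks roughly like the sketch below; the timings are illustrative, and note that the Handler field was renamed ProbeHandler in k8s.io/api v0.24+, so this matches the v1.17-era API used by this suite.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := &corev1.Probe{
		Handler: corev1.Handler{ // ProbeHandler in k8s.io/api v0.24+
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/", // the test's webserver answers 200 here for its whole life
				Port: intstr.FromInt(80),
			},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
	fmt.Printf("liveness probe: GET :%s%s\n", liveness.HTTPGet.Port.String(), liveness.HTTPGet.Path)
}
```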
• [SLOW TEST:244.997 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":385,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:17:43.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 26 21:17:43.280: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529624 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 26 21:17:43.281: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529624 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 26 21:17:53.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529664 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 26 21:17:53.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529664 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 26 21:18:03.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529694 0 2020-06-26 21:17:43 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 26 21:18:03.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529694 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 26 21:18:13.305: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529724 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 26 21:18:13.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-a 84c137f4-8dc7-449e-9d66-8bc4a6b48f9a 27529724 0 2020-06-26 21:17:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 26 21:18:23.314: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-b 9c49fdc4-3290-400e-86a0-8558bcc77c85 27529754 0 2020-06-26 21:18:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 26 21:18:23.314: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-b 9c49fdc4-3290-400e-86a0-8558bcc77c85 27529754 0 2020-06-26 21:18:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 26 21:18:33.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-b 9c49fdc4-3290-400e-86a0-8558bcc77c85 27529782 0 2020-06-26 21:18:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 26 21:18:33.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-436 /api/v1/namespaces/watch-436/configmaps/e2e-watch-test-configmap-b 9c49fdc4-3290-400e-86a0-8558bcc77c85 27529782 0 2020-06-26 21:18:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:18:43.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-436" for this suite. 
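(The ADDED/MODIFIED/DELETED events printed above come straight off label-selected watches: watcher A sees only label-A configmaps, watcher B only label-B ones, and the A-or-B watcher sees both, which is why each event appears twice in the log. A minimal sketch of one such watcher, reusing the clientset/context setup from the StatefulSet sketch earlier; it assumes client-go v0.18+, where Watch takes a context.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps streams events for configmaps carrying the test's label,
// printing the same event types (ADDED, MODIFIED, DELETED) the log shows.
func watchConfigMaps(ctx context.Context, cs *kubernetes.Clientset) error {
	w, err := cs.CoreV1().ConfigMaps("watch-436").Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		// ResourceVersion increases with each mutation, matching the
		// 27529624 -> 27529664 -> 27529694 -> 27529724 progression above.
		fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
	}
	return nil
}
```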
• [SLOW TEST:60.266 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":19,"skipped":388,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:18:43.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1397 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1397 I0626 21:18:43.536776 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1397, replica count: 2 I0626 21:18:46.587239 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:18:49.587434 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 21:18:49.587: INFO: Creating new exec pod Jun 26 21:18:54.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1397 execpodl979r -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 26 21:18:54.939: INFO: stderr: "I0626 21:18:54.754710 175 log.go:172] (0xc0009a7080) (0xc0008d0820) Create stream\nI0626 21:18:54.754775 175 log.go:172] (0xc0009a7080) (0xc0008d0820) Stream added, broadcasting: 1\nI0626 21:18:54.760053 175 log.go:172] (0xc0009a7080) Reply frame received for 1\nI0626 21:18:54.760107 175 log.go:172] (0xc0009a7080) (0xc000528820) Create stream\nI0626 21:18:54.760129 175 log.go:172] (0xc0009a7080) (0xc000528820) Stream added, broadcasting: 3\nI0626 21:18:54.761271 175 log.go:172] (0xc0009a7080) Reply frame received for 3\nI0626 21:18:54.761319 175 log.go:172] (0xc0009a7080) (0xc000454820) Create stream\nI0626 21:18:54.761338 175 log.go:172] (0xc0009a7080) (0xc000454820) Stream added, broadcasting: 5\nI0626 21:18:54.762196 175 log.go:172] (0xc0009a7080) Reply frame received for 5\nI0626 21:18:54.902375 175 log.go:172] (0xc0009a7080) Data frame received for 5\nI0626 21:18:54.902413 175 log.go:172] (0xc000454820) (5) Data frame handling\nI0626 21:18:54.902449 175 log.go:172] (0xc000454820) (5) Data frame sent\n+ nc -zv -t -w 2 
externalname-service 80\nI0626 21:18:54.929348 175 log.go:172] (0xc0009a7080) Data frame received for 3\nI0626 21:18:54.929396 175 log.go:172] (0xc000528820) (3) Data frame handling\nI0626 21:18:54.929426 175 log.go:172] (0xc0009a7080) Data frame received for 5\nI0626 21:18:54.929443 175 log.go:172] (0xc000454820) (5) Data frame handling\nI0626 21:18:54.929467 175 log.go:172] (0xc000454820) (5) Data frame sent\nI0626 21:18:54.929479 175 log.go:172] (0xc0009a7080) Data frame received for 5\nI0626 21:18:54.929486 175 log.go:172] (0xc000454820) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0626 21:18:54.930983 175 log.go:172] (0xc0009a7080) Data frame received for 1\nI0626 21:18:54.930995 175 log.go:172] (0xc0008d0820) (1) Data frame handling\nI0626 21:18:54.931002 175 log.go:172] (0xc0008d0820) (1) Data frame sent\nI0626 21:18:54.931071 175 log.go:172] (0xc0009a7080) (0xc0008d0820) Stream removed, broadcasting: 1\nI0626 21:18:54.931101 175 log.go:172] (0xc0009a7080) Go away received\nI0626 21:18:54.931567 175 log.go:172] (0xc0009a7080) (0xc0008d0820) Stream removed, broadcasting: 1\nI0626 21:18:54.931599 175 log.go:172] (0xc0009a7080) (0xc000528820) Stream removed, broadcasting: 3\nI0626 21:18:54.931613 175 log.go:172] (0xc0009a7080) (0xc000454820) Stream removed, broadcasting: 5\n" Jun 26 21:18:54.939: INFO: stdout: "" Jun 26 21:18:54.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1397 execpodl979r -- /bin/sh -x -c nc -zv -t -w 2 10.102.119.1 80' Jun 26 21:18:55.140: INFO: stderr: "I0626 21:18:55.062548 196 log.go:172] (0xc000af3130) (0xc000b66460) Create stream\nI0626 21:18:55.062609 196 log.go:172] (0xc000af3130) (0xc000b66460) Stream added, broadcasting: 1\nI0626 21:18:55.065267 196 log.go:172] (0xc000af3130) Reply frame received for 1\nI0626 21:18:55.065406 196 log.go:172] (0xc000af3130) (0xc0009b01e0) Create stream\nI0626 21:18:55.065452 196 log.go:172] (0xc000af3130) (0xc0009b01e0) Stream added, broadcasting: 3\nI0626 21:18:55.067036 196 log.go:172] (0xc000af3130) Reply frame received for 3\nI0626 21:18:55.067099 196 log.go:172] (0xc000af3130) (0xc0009b0280) Create stream\nI0626 21:18:55.067125 196 log.go:172] (0xc000af3130) (0xc0009b0280) Stream added, broadcasting: 5\nI0626 21:18:55.068251 196 log.go:172] (0xc000af3130) Reply frame received for 5\nI0626 21:18:55.131484 196 log.go:172] (0xc000af3130) Data frame received for 3\nI0626 21:18:55.131510 196 log.go:172] (0xc0009b01e0) (3) Data frame handling\nI0626 21:18:55.131541 196 log.go:172] (0xc000af3130) Data frame received for 5\nI0626 21:18:55.131569 196 log.go:172] (0xc0009b0280) (5) Data frame handling\nI0626 21:18:55.131590 196 log.go:172] (0xc0009b0280) (5) Data frame sent\nI0626 21:18:55.131605 196 log.go:172] (0xc000af3130) Data frame received for 5\nI0626 21:18:55.131613 196 log.go:172] (0xc0009b0280) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.119.1 80\nConnection to 10.102.119.1 80 port [tcp/http] succeeded!\nI0626 21:18:55.132992 196 log.go:172] (0xc000af3130) Data frame received for 1\nI0626 21:18:55.133016 196 log.go:172] (0xc000b66460) (1) Data frame handling\nI0626 21:18:55.133031 196 log.go:172] (0xc000b66460) (1) Data frame sent\nI0626 21:18:55.133101 196 log.go:172] (0xc000af3130) (0xc000b66460) Stream removed, broadcasting: 1\nI0626 21:18:55.133373 196 log.go:172] (0xc000af3130) Go away received\nI0626 21:18:55.133563 196 log.go:172] (0xc000af3130) (0xc000b66460) Stream removed, broadcasting: 1\nI0626 
21:18:55.133579 196 log.go:172] (0xc000af3130) (0xc0009b01e0) Stream removed, broadcasting: 3\nI0626 21:18:55.133586 196 log.go:172] (0xc000af3130) (0xc0009b0280) Stream removed, broadcasting: 5\n" Jun 26 21:18:55.141: INFO: stdout: "" Jun 26 21:18:55.141: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:18:55.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1397" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.850 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":20,"skipped":400,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:18:55.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-f47c0d88-f8a3-4b07-bfb9-d4f70cb316c9 in namespace container-probe-3889 Jun 26 21:18:59.273: INFO: Started pod busybox-f47c0d88-f8a3-4b07-bfb9-d4f70cb316c9 in namespace container-probe-3889 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 21:18:59.276: INFO: Initial restart count of pod busybox-f47c0d88-f8a3-4b07-bfb9-d4f70cb316c9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:22:59.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3889" for this suite. 
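For reference, the probe behavior exercised above can be reproduced with a pod along the following lines (a minimal sketch; the pod name and sleep duration are illustrative, not taken from this run). Because /tmp/health exists for the life of the container, the exec probe keeps succeeding and restartCount stays at 0:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo    # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # create the file the probe reads, then stay alive
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# the restart count should remain 0 while the probe succeeds
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'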
• [SLOW TEST:244.810 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:22:59.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:23:00.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2286" for this suite. 
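The Kubelet test above only checks that a pod whose command always fails can still be deleted. A minimal way to try the same thing by hand (pod name is hypothetical): the container exits non-zero on every start, so the pod crash-loops, yet deletion proceeds normally:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo        # hypothetical name
spec:
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]   # always exits 1, so the container never stays up
EOF
kubectl delete pod bin-false-demo   # must succeed despite the crash loop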
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":421,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:23:00.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4419 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-4419 Jun 26 21:23:00.358: INFO: Found 0 stateful pods, waiting for 1 Jun 26 21:23:10.363: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 26 21:23:10.382: INFO: Deleting all statefulset in ns statefulset-4419 Jun 26 21:23:10.388: INFO: Scaling statefulset ss to 0 Jun 26 21:23:30.458: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 21:23:30.460: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:23:30.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4419" for this suite. 
• [SLOW TEST:30.251 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":23,"skipped":423,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:23:30.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:23:31.000: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:23:33.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803411, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803411, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803411, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803410, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:23:36.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 26 21:23:40.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-7434 to-be-attached-pod -i -c=container1' Jun 26 21:23:43.098: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:23:43.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7434" for this suite. STEP: Destroying namespace "webhook-7434-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.752 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":24,"skipped":444,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:23:43.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:23:43.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 26 21:23:43.941: INFO: stderr: "" Jun 26 21:23:43.941: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:28:04Z\", GoVersion:\"go1.13.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:23:43.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3916" for this suite. 
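The assertion above is simply that kubectl version reports both the client and the server build info. A quick manual equivalent (the jq dependency is an assumed convenience, not part of the suite):

kubectl version
# machine-readable form; both halves should be populated
kubectl version -o json | jq '.clientVersion.gitVersion, .serverVersion.gitVersion'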
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":25,"skipped":446,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:23:43.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 21:23:44.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9053' Jun 26 21:23:44.365: INFO: stderr: "" Jun 26 21:23:44.365: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 26 21:23:49.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9053 -o json' Jun 26 21:23:49.529: INFO: stderr: "" Jun 26 21:23:49.529: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-26T21:23:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9053\",\n \"resourceVersion\": \"27530959\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9053/pods/e2e-test-httpd-pod\",\n \"uid\": \"378d9eb3-e0c3-4977-b5a5-e54774256949\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zrfsr\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zrfsr\",\n \"secret\": {\n \"defaultMode\": 420,\n 
\"secretName\": \"default-token-zrfsr\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T21:23:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T21:23:46Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T21:23:46Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T21:23:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://69717b369070bfacda32841556f907f0541a8bc561e56e1b6700d0008a12cbc3\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-26T21:23:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.32\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.32\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-26T21:23:44Z\"\n }\n}\n" STEP: replace the image in the pod Jun 26 21:23:49.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9053' Jun 26 21:23:49.820: INFO: stderr: "" Jun 26 21:23:49.820: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Jun 26 21:23:49.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9053' Jun 26 21:23:59.306: INFO: stderr: "" Jun 26 21:23:59.306: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:23:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9053" for this suite. 
• [SLOW TEST:15.365 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":26,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:23:59.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-0543d705-22ad-4483-93ae-c81f0044f274 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:23:59.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6634" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":27,"skipped":488,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:23:59.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 21:23:59.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5606' Jun 26 21:23:59.602: INFO: stderr: "" Jun 26 21:23:59.602: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Jun 26 21:23:59.612: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5606' Jun 26 21:24:09.482: INFO: stderr: "" Jun 26 21:24:09.482: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:24:09.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5606" for this suite. • [SLOW TEST:10.042 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":28,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:24:09.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:24:20.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2420" for this suite. • [SLOW TEST:11.228 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":29,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:24:20.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 21:24:20.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9027' Jun 26 21:24:20.888: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 26 21:24:20.888: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Jun 26 21:24:20.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9027' Jun 26 21:24:21.045: INFO: stderr: "" Jun 26 21:24:21.045: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:24:21.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9027" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":30,"skipped":565,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:24:21.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-2f9304d7-bbe9-4139-bca4-3ff48527e2a4 STEP: Creating a pod to test consume configMaps Jun 26 21:24:21.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079" in namespace "projected-5934" to be "success or failure" Jun 26 21:24:21.190: INFO: Pod "pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079": Phase="Pending", Reason="", readiness=false. Elapsed: 29.505302ms Jun 26 21:24:23.198: INFO: Pod "pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037992114s Jun 26 21:24:25.208: INFO: Pod "pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048371337s STEP: Saw pod success Jun 26 21:24:25.209: INFO: Pod "pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079" satisfied condition "success or failure" Jun 26 21:24:25.213: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079 container projected-configmap-volume-test: STEP: delete the pod Jun 26 21:24:25.284: INFO: Waiting for pod pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079 to disappear Jun 26 21:24:25.290: INFO: Pod pod-projected-configmaps-6efd026e-f1ca-4e23-9b49-79dac97cb079 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:24:25.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5934" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":576,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:24:25.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 26 21:24:25.654: INFO: Waiting up to 5m0s for pod "pod-fa0966af-2b70-4473-9341-dbb526c80a65" in namespace "emptydir-2071" to be "success or failure" Jun 26 21:24:25.656: INFO: Pod "pod-fa0966af-2b70-4473-9341-dbb526c80a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295657ms Jun 26 21:24:27.659: INFO: Pod "pod-fa0966af-2b70-4473-9341-dbb526c80a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00556162s Jun 26 21:24:29.664: INFO: Pod "pod-fa0966af-2b70-4473-9341-dbb526c80a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009900715s STEP: Saw pod success Jun 26 21:24:29.664: INFO: Pod "pod-fa0966af-2b70-4473-9341-dbb526c80a65" satisfied condition "success or failure" Jun 26 21:24:29.667: INFO: Trying to get logs from node jerma-worker pod pod-fa0966af-2b70-4473-9341-dbb526c80a65 container test-container: STEP: delete the pod Jun 26 21:24:29.703: INFO: Waiting for pod pod-fa0966af-2b70-4473-9341-dbb526c80a65 to disappear Jun 26 21:24:29.707: INFO: Pod pod-fa0966af-2b70-4473-9341-dbb526c80a65 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:24:29.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2071" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":583,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:24:29.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:24:29.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 26 21:24:30.366: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-26T21:24:30Z generation:1 name:name1 resourceVersion:27531259 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7cacf024-fca4-429b-8664-58c1d111450e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 26 21:24:40.371: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-26T21:24:40Z generation:1 name:name2 resourceVersion:27531306 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5aefd684-7bb1-456b-90cf-23e374c34c02] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 26 21:24:50.378: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-26T21:24:30Z generation:2 name:name1 resourceVersion:27531336 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7cacf024-fca4-429b-8664-58c1d111450e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 26 21:25:00.384: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-26T21:24:40Z generation:2 name:name2 resourceVersion:27531366 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5aefd684-7bb1-456b-90cf-23e374c34c02] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 26 21:25:10.392: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-26T21:24:30Z generation:2 name:name1 resourceVersion:27531396 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7cacf024-fca4-429b-8664-58c1d111450e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 26 21:25:20.399: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-26T21:24:40Z generation:2 name:name2 resourceVersion:27531426 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5aefd684-7bb1-456b-90cf-23e374c34c02] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:25:30.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3729" for this suite. • [SLOW TEST:61.213 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":33,"skipped":590,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:25:30.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:25:38.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5335" for this suite. • [SLOW TEST:7.087 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":34,"skipped":591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:25:38.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2886 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jun 26 21:25:38.086: INFO: Found 0 stateful pods, waiting for 3 Jun 26 21:25:48.092: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 21:25:48.092: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 21:25:48.092: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 26 21:25:48.118: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 26 21:25:58.168: INFO: Updating stateful set ss2 Jun 26 21:25:58.209: INFO: Waiting for Pod statefulset-2886/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 21:26:08.217: INFO: Waiting for Pod statefulset-2886/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jun 26 21:26:18.691: INFO: Found 2 stateful pods, waiting for 3 Jun 26 21:26:28.695: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 21:26:28.695: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 21:26:28.695: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 26 21:26:28.718: INFO: Updating stateful set ss2 Jun 26 21:26:28.754: INFO: Waiting for Pod statefulset-2886/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 21:26:38.780: INFO: Updating stateful set ss2 Jun 26 21:26:38.808: INFO: Waiting for StatefulSet statefulset-2886/ss2 to complete update Jun 26 21:26:38.808: INFO: Waiting for Pod statefulset-2886/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 21:26:48.816: INFO: Waiting for StatefulSet statefulset-2886/ss2 to complete update Jun 26 21:26:48.816: INFO: Waiting for Pod statefulset-2886/ss2-0 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 26 21:26:58.839: INFO: Deleting all statefulset in ns statefulset-2886 Jun 26 21:26:58.842: INFO: Scaling statefulset ss2 to 0 Jun 26 21:27:18.860: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 21:27:18.863: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:27:18.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2886" for this suite. • [SLOW TEST:100.867 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":35,"skipped":628,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:27:18.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0626 21:27:19.690534 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 26 21:27:19.690: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:27:19.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-831" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":36,"skipped":629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:27:19.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:27:19.731: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:27:20.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-953" for this suite.
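The status sub-resource the test above exercises has to be enabled on the CRD itself. A sketch of such a definition (group, kind, and schema are hypothetical); with this in place, status reads and writes go to .../widgets/<name>/status rather than to the main resource:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com     # hypothetical group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                # enables GET/PUT/PATCH on the /status endpoint
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF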
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":37,"skipped":671,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:27:20.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:27:21.033: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:27:23.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:27:25.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803641, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:27:28.105: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:27:28.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6906-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:27:29.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3335" for this suite. STEP: Destroying namespace "webhook-3335-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.130 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":38,"skipped":674,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:27:29.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 26 21:27:29.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:27:29.622: INFO: Number of nodes with available pods: 0 Jun 26 21:27:29.622: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:27:30.628: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:27:30.632: INFO: Number of nodes with available pods: 0 Jun 26 21:27:30.632: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:27:31.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:27:31.631: INFO: Number of nodes with available pods: 0 Jun 26 21:27:31.631: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:27:32.751: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:27:32.758: INFO: Number of nodes with available pods: 0 Jun 26 21:27:32.758: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:27:33.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:27:33.630: INFO: Number of nodes with available pods: 2 Jun 26 21:27:33.630: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 26 21:27:33.647: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:27:33.744: INFO: Number of nodes with available pods: 2 Jun 26 21:27:33.744: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7654, will wait for the garbage collector to delete the pods Jun 26 21:27:34.937: INFO: Deleting DaemonSet.extensions daemon-set took: 29.379162ms Jun 26 21:27:35.137: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.21217ms Jun 26 21:27:49.541: INFO: Number of nodes with available pods: 0 Jun 26 21:27:49.541: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 21:27:49.544: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7654/daemonsets","resourceVersion":"27532338"},"items":null} Jun 26 21:27:49.546: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7654/pods","resourceVersion":"27532338"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:27:49.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7654" for this suite. 
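The retry behavior above can be observed with any small DaemonSet: remove one of its pods and the controller recreates it on the same node. A sketch (names are illustrative; the test flips a pod's phase to Failed via the API rather than deleting it):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo         # hypothetical name
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# simulate a failed daemon pod; the DaemonSet controller replaces it
kubectl delete pod -l app=daemon-set-demo --wait=false
kubectl get pods -l app=daemon-set-demo -o wide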
• [SLOW TEST:20.070 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":39,"skipped":686,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 26 21:27:49.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 26 21:27:49.619: INFO: Waiting up to 5m0s for pod "pod-0a001180-32cf-4fec-af10-179032218c0a" in namespace "emptydir-8926" to be "success or failure"
Jun 26 21:27:49.640: INFO: Pod "pod-0a001180-32cf-4fec-af10-179032218c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.908655ms
Jun 26 21:27:51.644: INFO: Pod "pod-0a001180-32cf-4fec-af10-179032218c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025344218s
Jun 26 21:27:53.648: INFO: Pod "pod-0a001180-32cf-4fec-af10-179032218c0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029514768s
STEP: Saw pod success
Jun 26 21:27:53.648: INFO: Pod "pod-0a001180-32cf-4fec-af10-179032218c0a" satisfied condition "success or failure"
Jun 26 21:27:53.652: INFO: Trying to get logs from node jerma-worker pod pod-0a001180-32cf-4fec-af10-179032218c0a container test-container:
STEP: delete the pod
Jun 26 21:27:53.745: INFO: Waiting for pod pod-0a001180-32cf-4fec-af10-179032218c0a to disappear
Jun 26 21:27:53.755: INFO: Pod pod-0a001180-32cf-4fec-af10-179032218c0a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:27:53.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8926" for this suite.
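The emptydir test above creates a pod whose volume is a tmpfs-backed emptyDir, writes a file as a non-root user with mode 0644, and waits for phase Succeeded (the "success or failure" condition in the log). A rough equivalent of such a pod, built with the Go API types and printed as JSON, is sketched below; the pod name, image, and shell command are stand-ins for the suite's own mounttest tooling, not its actual spec.

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // non-root, as in the (non-root,0644,tmpfs) variant
			},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	// Print the manifest; applying it and waiting for phase Succeeded mirrors
	// the wait loop in the log above.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}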
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:27:53.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:27:57.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2727" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:27:57.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 26 21:27:58.791: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 26 21:28:00.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803678, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728803678, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:28:03.829: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:28:03.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:28:05.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3900" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.205 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":42,"skipped":794,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:05.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1807 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 26 21:28:05.190: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 26 21:28:25.356: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.43:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1807 PodName:host-test-container-pod ContainerName:agnhost 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 21:28:25.356: INFO: >>> kubeConfig: /root/.kube/config I0626 21:28:25.396259 6 log.go:172] (0xc0017a9c30) (0xc00152cd20) Create stream I0626 21:28:25.396291 6 log.go:172] (0xc0017a9c30) (0xc00152cd20) Stream added, broadcasting: 1 I0626 21:28:25.399005 6 log.go:172] (0xc0017a9c30) Reply frame received for 1 I0626 21:28:25.399041 6 log.go:172] (0xc0017a9c30) (0xc00236e500) Create stream I0626 21:28:25.399054 6 log.go:172] (0xc0017a9c30) (0xc00236e500) Stream added, broadcasting: 3 I0626 21:28:25.400003 6 log.go:172] (0xc0017a9c30) Reply frame received for 3 I0626 21:28:25.400056 6 log.go:172] (0xc0017a9c30) (0xc00152cdc0) Create stream I0626 21:28:25.400072 6 log.go:172] (0xc0017a9c30) (0xc00152cdc0) Stream added, broadcasting: 5 I0626 21:28:25.401098 6 log.go:172] (0xc0017a9c30) Reply frame received for 5 I0626 21:28:25.612025 6 log.go:172] (0xc0017a9c30) Data frame received for 3 I0626 21:28:25.612048 6 log.go:172] (0xc00236e500) (3) Data frame handling I0626 21:28:25.612059 6 log.go:172] (0xc00236e500) (3) Data frame sent I0626 21:28:25.612064 6 log.go:172] (0xc0017a9c30) Data frame received for 3 I0626 21:28:25.612070 6 log.go:172] (0xc00236e500) (3) Data frame handling I0626 21:28:25.612288 6 log.go:172] (0xc0017a9c30) Data frame received for 5 I0626 21:28:25.612298 6 log.go:172] (0xc00152cdc0) (5) Data frame handling I0626 21:28:25.614620 6 log.go:172] (0xc0017a9c30) Data frame received for 1 I0626 21:28:25.614656 6 log.go:172] (0xc00152cd20) (1) Data frame handling I0626 21:28:25.614704 6 log.go:172] (0xc00152cd20) (1) Data frame sent I0626 21:28:25.614721 6 log.go:172] (0xc0017a9c30) (0xc00152cd20) Stream removed, broadcasting: 1 I0626 21:28:25.614820 6 log.go:172] (0xc0017a9c30) (0xc00152cd20) Stream removed, broadcasting: 1 I0626 21:28:25.614840 6 log.go:172] (0xc0017a9c30) (0xc00236e500) Stream removed, broadcasting: 3 I0626 21:28:25.614852 6 log.go:172] (0xc0017a9c30) (0xc00152cdc0) Stream removed, broadcasting: 5 Jun 26 21:28:25.614: INFO: Found all expected endpoints: [netserver-0] I0626 21:28:25.614907 6 log.go:172] (0xc0017a9c30) Go away received Jun 26 21:28:25.617: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.130:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1807 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 21:28:25.617: INFO: >>> kubeConfig: /root/.kube/config I0626 21:28:25.647106 6 log.go:172] (0xc0028ac370) (0xc00152d4a0) Create stream I0626 21:28:25.647142 6 log.go:172] (0xc0028ac370) (0xc00152d4a0) Stream added, broadcasting: 1 I0626 21:28:25.649844 6 log.go:172] (0xc0028ac370) Reply frame received for 1 I0626 21:28:25.649881 6 log.go:172] (0xc0028ac370) (0xc00152d540) Create stream I0626 21:28:25.649894 6 log.go:172] (0xc0028ac370) (0xc00152d540) Stream added, broadcasting: 3 I0626 21:28:25.650879 6 log.go:172] (0xc0028ac370) Reply frame received for 3 I0626 21:28:25.650952 6 log.go:172] (0xc0028ac370) (0xc001e2e0a0) Create stream I0626 21:28:25.650977 6 log.go:172] (0xc0028ac370) (0xc001e2e0a0) Stream added, broadcasting: 5 I0626 21:28:25.652712 6 log.go:172] (0xc0028ac370) Reply frame received for 5 I0626 21:28:25.724158 6 log.go:172] (0xc0028ac370) Data frame received for 3 I0626 21:28:25.724197 6 log.go:172] (0xc00152d540) (3) Data frame handling I0626 21:28:25.724221 6 log.go:172] (0xc00152d540) (3) Data frame sent I0626 
21:28:25.724239 6 log.go:172] (0xc0028ac370) Data frame received for 3 I0626 21:28:25.724253 6 log.go:172] (0xc00152d540) (3) Data frame handling I0626 21:28:25.724322 6 log.go:172] (0xc0028ac370) Data frame received for 5 I0626 21:28:25.724355 6 log.go:172] (0xc001e2e0a0) (5) Data frame handling I0626 21:28:25.725854 6 log.go:172] (0xc0028ac370) Data frame received for 1 I0626 21:28:25.725875 6 log.go:172] (0xc00152d4a0) (1) Data frame handling I0626 21:28:25.725888 6 log.go:172] (0xc00152d4a0) (1) Data frame sent I0626 21:28:25.725904 6 log.go:172] (0xc0028ac370) (0xc00152d4a0) Stream removed, broadcasting: 1 I0626 21:28:25.725921 6 log.go:172] (0xc0028ac370) Go away received I0626 21:28:25.726040 6 log.go:172] (0xc0028ac370) (0xc00152d4a0) Stream removed, broadcasting: 1 I0626 21:28:25.726058 6 log.go:172] (0xc0028ac370) (0xc00152d540) Stream removed, broadcasting: 3 I0626 21:28:25.726067 6 log.go:172] (0xc0028ac370) (0xc001e2e0a0) Stream removed, broadcasting: 5 Jun 26 21:28:25.726: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:28:25.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1807" for this suite. • [SLOW TEST:20.604 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":803,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:25.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 26 21:28:25.830: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 21:28:25.863: INFO: Waiting for terminating namespaces to be deleted... 
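Before the scheduler-predicates run below continues, note the shape of the node-pod HTTP check that just passed: a host-network test pod curls each netserver pod on :8080/hostName and compares the reply against the expected endpoint names (netserver-0, netserver-1). A standalone sketch of that probe follows; the pod IP is a placeholder taken from the log (the suite discovers real pod IPs through the API), and the program must run somewhere with pod-network reachability.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probe the agnhost netserver /hostName endpoint the way the test's curl does.
func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://10.244.1.43:8080/hostName") // placeholder pod IP
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("endpoint reported hostname: %q\n", string(body))
}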
Jun 26 21:28:25.869: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Jun 26 21:28:25.874: INFO: host-test-container-pod from pod-network-test-1807 started at 2020-06-26 21:28:21 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.874: INFO: Container agnhost ready: true, restart count 0
Jun 26 21:28:25.874: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.874: INFO: Container kindnet-cni ready: true, restart count 2
Jun 26 21:28:25.874: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.874: INFO: Container kube-proxy ready: true, restart count 0
Jun 26 21:28:25.874: INFO: netserver-0 from pod-network-test-1807 started at 2020-06-26 21:28:05 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.874: INFO: Container webserver ready: true, restart count 0
Jun 26 21:28:25.874: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Jun 26 21:28:25.885: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.885: INFO: Container kindnet-cni ready: true, restart count 2
Jun 26 21:28:25.885: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.885: INFO: Container kube-bench ready: false, restart count 0
Jun 26 21:28:25.885: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.885: INFO: Container kube-proxy ready: true, restart count 0
Jun 26 21:28:25.885: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.885: INFO: Container kube-hunter ready: false, restart count 0
Jun 26 21:28:25.885: INFO: netserver-1 from pod-network-test-1807 started at 2020-06-26 21:28:05 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.885: INFO: Container webserver ready: true, restart count 0
Jun 26 21:28:25.885: INFO: test-container-pod from pod-network-test-1807 started at 2020-06-26 21:28:21 +0000 UTC (1 container statuses recorded)
Jun 26 21:28:25.885: INFO: Container webserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161c353d61109dd2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 21:28:26.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1627" for this suite.
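The predicate verified above is simple: a pod with a nodeSelector that matches no node must stay Pending, and the scheduler must record a FailedScheduling event. A hedged client-go sketch of the same check follows, under the same client-go assumptions as the earlier DaemonSet sketch; the namespace, pod name, and selector key are illustrative, not the suite's values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // matches no node
			Containers:   []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	time.Sleep(2 * time.Second) // give the scheduler a moment, as the test's event wait does

	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=restricted-pod,reason=FailedScheduling",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Println(e.Reason, e.Message) // expect "didn't match node selector"
	}
}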
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":44,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:26.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-7775 STEP: creating replication controller nodeport-test in namespace services-7775 I0626 21:28:27.206105 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7775, replica count: 2 I0626 21:28:30.256498 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:28:33.256768 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 21:28:33.256: INFO: Creating new exec pod Jun 26 21:28:38.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7775 execpodxpxgs -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jun 26 21:28:38.531: INFO: stderr: "I0626 21:28:38.416936 434 log.go:172] (0xc0008c29a0) (0xc0009301e0) Create stream\nI0626 21:28:38.417002 434 log.go:172] (0xc0008c29a0) (0xc0009301e0) Stream added, broadcasting: 1\nI0626 21:28:38.419493 434 log.go:172] (0xc0008c29a0) Reply frame received for 1\nI0626 21:28:38.419546 434 log.go:172] (0xc0008c29a0) (0xc000685ae0) Create stream\nI0626 21:28:38.419559 434 log.go:172] (0xc0008c29a0) (0xc000685ae0) Stream added, broadcasting: 3\nI0626 21:28:38.420487 434 log.go:172] (0xc0008c29a0) Reply frame received for 3\nI0626 21:28:38.420526 434 log.go:172] (0xc0008c29a0) (0xc000735900) Create stream\nI0626 21:28:38.420542 434 log.go:172] (0xc0008c29a0) (0xc000735900) Stream added, broadcasting: 5\nI0626 21:28:38.421798 434 log.go:172] (0xc0008c29a0) Reply frame received for 5\nI0626 21:28:38.490626 434 log.go:172] (0xc0008c29a0) Data frame received for 5\nI0626 21:28:38.490657 434 log.go:172] (0xc000735900) (5) Data frame handling\nI0626 21:28:38.490673 434 log.go:172] (0xc000735900) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0626 21:28:38.520602 434 log.go:172] (0xc0008c29a0) Data frame received for 5\nI0626 21:28:38.520651 434 log.go:172] (0xc000735900) (5) Data frame handling\nI0626 21:28:38.520686 434 log.go:172] (0xc000735900) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0626 21:28:38.520958 434 
log.go:172] (0xc0008c29a0) Data frame received for 3\nI0626 21:28:38.520991 434 log.go:172] (0xc000685ae0) (3) Data frame handling\nI0626 21:28:38.521013 434 log.go:172] (0xc0008c29a0) Data frame received for 5\nI0626 21:28:38.521024 434 log.go:172] (0xc000735900) (5) Data frame handling\nI0626 21:28:38.523127 434 log.go:172] (0xc0008c29a0) Data frame received for 1\nI0626 21:28:38.523173 434 log.go:172] (0xc0009301e0) (1) Data frame handling\nI0626 21:28:38.523191 434 log.go:172] (0xc0009301e0) (1) Data frame sent\nI0626 21:28:38.523207 434 log.go:172] (0xc0008c29a0) (0xc0009301e0) Stream removed, broadcasting: 1\nI0626 21:28:38.523298 434 log.go:172] (0xc0008c29a0) Go away received\nI0626 21:28:38.523738 434 log.go:172] (0xc0008c29a0) (0xc0009301e0) Stream removed, broadcasting: 1\nI0626 21:28:38.523763 434 log.go:172] (0xc0008c29a0) (0xc000685ae0) Stream removed, broadcasting: 3\nI0626 21:28:38.523777 434 log.go:172] (0xc0008c29a0) (0xc000735900) Stream removed, broadcasting: 5\n" Jun 26 21:28:38.531: INFO: stdout: "" Jun 26 21:28:38.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7775 execpodxpxgs -- /bin/sh -x -c nc -zv -t -w 2 10.103.53.227 80' Jun 26 21:28:38.761: INFO: stderr: "I0626 21:28:38.674413 455 log.go:172] (0xc000137080) (0xc0007ae1e0) Create stream\nI0626 21:28:38.674479 455 log.go:172] (0xc000137080) (0xc0007ae1e0) Stream added, broadcasting: 1\nI0626 21:28:38.677647 455 log.go:172] (0xc000137080) Reply frame received for 1\nI0626 21:28:38.677690 455 log.go:172] (0xc000137080) (0xc0007ae320) Create stream\nI0626 21:28:38.677707 455 log.go:172] (0xc000137080) (0xc0007ae320) Stream added, broadcasting: 3\nI0626 21:28:38.678517 455 log.go:172] (0xc000137080) Reply frame received for 3\nI0626 21:28:38.678559 455 log.go:172] (0xc000137080) (0xc00060d4a0) Create stream\nI0626 21:28:38.678577 455 log.go:172] (0xc000137080) (0xc00060d4a0) Stream added, broadcasting: 5\nI0626 21:28:38.679652 455 log.go:172] (0xc000137080) Reply frame received for 5\nI0626 21:28:38.752518 455 log.go:172] (0xc000137080) Data frame received for 3\nI0626 21:28:38.752555 455 log.go:172] (0xc0007ae320) (3) Data frame handling\nI0626 21:28:38.752610 455 log.go:172] (0xc000137080) Data frame received for 5\nI0626 21:28:38.752661 455 log.go:172] (0xc00060d4a0) (5) Data frame handling\nI0626 21:28:38.752711 455 log.go:172] (0xc00060d4a0) (5) Data frame sent\nI0626 21:28:38.752762 455 log.go:172] (0xc000137080) Data frame received for 5\nI0626 21:28:38.752780 455 log.go:172] (0xc00060d4a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.53.227 80\nConnection to 10.103.53.227 80 port [tcp/http] succeeded!\nI0626 21:28:38.754352 455 log.go:172] (0xc000137080) Data frame received for 1\nI0626 21:28:38.754378 455 log.go:172] (0xc0007ae1e0) (1) Data frame handling\nI0626 21:28:38.754395 455 log.go:172] (0xc0007ae1e0) (1) Data frame sent\nI0626 21:28:38.754429 455 log.go:172] (0xc000137080) (0xc0007ae1e0) Stream removed, broadcasting: 1\nI0626 21:28:38.754456 455 log.go:172] (0xc000137080) Go away received\nI0626 21:28:38.754956 455 log.go:172] (0xc000137080) (0xc0007ae1e0) Stream removed, broadcasting: 1\nI0626 21:28:38.754979 455 log.go:172] (0xc000137080) (0xc0007ae320) Stream removed, broadcasting: 3\nI0626 21:28:38.754992 455 log.go:172] (0xc000137080) (0xc00060d4a0) Stream removed, broadcasting: 5\n" Jun 26 21:28:38.761: INFO: stdout: "" Jun 26 21:28:38.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-7775 execpodxpxgs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32055' Jun 26 21:28:38.966: INFO: stderr: "I0626 21:28:38.906030 476 log.go:172] (0xc000a9a000) (0xc0008c6000) Create stream\nI0626 21:28:38.906129 476 log.go:172] (0xc000a9a000) (0xc0008c6000) Stream added, broadcasting: 1\nI0626 21:28:38.910362 476 log.go:172] (0xc000a9a000) Reply frame received for 1\nI0626 21:28:38.910416 476 log.go:172] (0xc000a9a000) (0xc000af6000) Create stream\nI0626 21:28:38.910442 476 log.go:172] (0xc000a9a000) (0xc000af6000) Stream added, broadcasting: 3\nI0626 21:28:38.911517 476 log.go:172] (0xc000a9a000) Reply frame received for 3\nI0626 21:28:38.911566 476 log.go:172] (0xc000a9a000) (0xc0008c60a0) Create stream\nI0626 21:28:38.911580 476 log.go:172] (0xc000a9a000) (0xc0008c60a0) Stream added, broadcasting: 5\nI0626 21:28:38.912669 476 log.go:172] (0xc000a9a000) Reply frame received for 5\nI0626 21:28:38.958363 476 log.go:172] (0xc000a9a000) Data frame received for 3\nI0626 21:28:38.958410 476 log.go:172] (0xc000af6000) (3) Data frame handling\nI0626 21:28:38.958459 476 log.go:172] (0xc000a9a000) Data frame received for 5\nI0626 21:28:38.958495 476 log.go:172] (0xc0008c60a0) (5) Data frame handling\nI0626 21:28:38.958523 476 log.go:172] (0xc0008c60a0) (5) Data frame sent\nI0626 21:28:38.958543 476 log.go:172] (0xc000a9a000) Data frame received for 5\nI0626 21:28:38.958560 476 log.go:172] (0xc0008c60a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32055\nConnection to 172.17.0.10 32055 port [tcp/32055] succeeded!\nI0626 21:28:38.960051 476 log.go:172] (0xc000a9a000) Data frame received for 1\nI0626 21:28:38.960070 476 log.go:172] (0xc0008c6000) (1) Data frame handling\nI0626 21:28:38.960080 476 log.go:172] (0xc0008c6000) (1) Data frame sent\nI0626 21:28:38.960092 476 log.go:172] (0xc000a9a000) (0xc0008c6000) Stream removed, broadcasting: 1\nI0626 21:28:38.960107 476 log.go:172] (0xc000a9a000) Go away received\nI0626 21:28:38.960465 476 log.go:172] (0xc000a9a000) (0xc0008c6000) Stream removed, broadcasting: 1\nI0626 21:28:38.960485 476 log.go:172] (0xc000a9a000) (0xc000af6000) Stream removed, broadcasting: 3\nI0626 21:28:38.960496 476 log.go:172] (0xc000a9a000) (0xc0008c60a0) Stream removed, broadcasting: 5\n" Jun 26 21:28:38.966: INFO: stdout: "" Jun 26 21:28:38.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7775 execpodxpxgs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32055' Jun 26 21:28:39.168: INFO: stderr: "I0626 21:28:39.104471 496 log.go:172] (0xc000116f20) (0xc000a66000) Create stream\nI0626 21:28:39.104521 496 log.go:172] (0xc000116f20) (0xc000a66000) Stream added, broadcasting: 1\nI0626 21:28:39.107727 496 log.go:172] (0xc000116f20) Reply frame received for 1\nI0626 21:28:39.107774 496 log.go:172] (0xc000116f20) (0xc000950000) Create stream\nI0626 21:28:39.107789 496 log.go:172] (0xc000116f20) (0xc000950000) Stream added, broadcasting: 3\nI0626 21:28:39.108774 496 log.go:172] (0xc000116f20) Reply frame received for 3\nI0626 21:28:39.108830 496 log.go:172] (0xc000116f20) (0xc000683a40) Create stream\nI0626 21:28:39.108845 496 log.go:172] (0xc000116f20) (0xc000683a40) Stream added, broadcasting: 5\nI0626 21:28:39.110118 496 log.go:172] (0xc000116f20) Reply frame received for 5\nI0626 21:28:39.158190 496 log.go:172] (0xc000116f20) Data frame received for 5\nI0626 21:28:39.158252 496 log.go:172] (0xc000683a40) (5) Data frame handling\nI0626 21:28:39.158289 496 log.go:172] (0xc000683a40) (5) Data frame sent\n+ nc 
-zv -t -w 2 172.17.0.8 32055\nI0626 21:28:39.159620 496 log.go:172] (0xc000116f20) Data frame received for 5\nI0626 21:28:39.159638 496 log.go:172] (0xc000683a40) (5) Data frame handling\nI0626 21:28:39.159649 496 log.go:172] (0xc000683a40) (5) Data frame sent\nConnection to 172.17.0.8 32055 port [tcp/32055] succeeded!\nI0626 21:28:39.159973 496 log.go:172] (0xc000116f20) Data frame received for 5\nI0626 21:28:39.159993 496 log.go:172] (0xc000683a40) (5) Data frame handling\nI0626 21:28:39.160257 496 log.go:172] (0xc000116f20) Data frame received for 3\nI0626 21:28:39.160269 496 log.go:172] (0xc000950000) (3) Data frame handling\nI0626 21:28:39.162508 496 log.go:172] (0xc000116f20) Data frame received for 1\nI0626 21:28:39.162533 496 log.go:172] (0xc000a66000) (1) Data frame handling\nI0626 21:28:39.162550 496 log.go:172] (0xc000a66000) (1) Data frame sent\nI0626 21:28:39.162571 496 log.go:172] (0xc000116f20) (0xc000a66000) Stream removed, broadcasting: 1\nI0626 21:28:39.162592 496 log.go:172] (0xc000116f20) Go away received\nI0626 21:28:39.162999 496 log.go:172] (0xc000116f20) (0xc000a66000) Stream removed, broadcasting: 1\nI0626 21:28:39.163018 496 log.go:172] (0xc000116f20) (0xc000950000) Stream removed, broadcasting: 3\nI0626 21:28:39.163025 496 log.go:172] (0xc000116f20) (0xc000683a40) Stream removed, broadcasting: 5\n" Jun 26 21:28:39.168: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:28:39.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7775" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.241 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":45,"skipped":840,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:39.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 21:28:43.358: INFO: Expected: &{DONE} to match Container's Termination Message: DONE 
-- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:28:43.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2091" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":843,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:43.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:28:47.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9500" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":865,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:47.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 26 21:28:47.589: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 21:28:47.627: INFO: Waiting for terminating namespaces to be deleted... 
Jun 26 21:28:47.630: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 26 21:28:47.634: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.634: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 21:28:47.634: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.634: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 21:28:47.634: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 26 21:28:47.639: INFO: busybox-readonly-fsd20e3808-dda9-4251-8263-cadb62aaee99 from kubelet-test-9500 started at 2020-06-26 21:28:43 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.639: INFO: Container busybox-readonly-fsd20e3808-dda9-4251-8263-cadb62aaee99 ready: true, restart count 0 Jun 26 21:28:47.639: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.639: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 21:28:47.639: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.639: INFO: Container kube-bench ready: false, restart count 0 Jun 26 21:28:47.639: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.639: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 21:28:47.639: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 26 21:28:47.639: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Jun 26 21:28:47.967: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Jun 26 21:28:47.967: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Jun 26 21:28:47.967: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Jun 26 21:28:47.967: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 Jun 26 21:28:47.967: INFO: Pod busybox-readonly-fsd20e3808-dda9-4251-8263-cadb62aaee99 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jun 26 21:28:47.967: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Jun 26 21:28:47.973: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e.161c35428c6644cc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1479/filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e.161c354308a38de9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e.161c3543505bdef8], Reason = [Created], Message = [Created container filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e] STEP: Considering event: Type = [Normal], Name = [filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e.161c3543607bf862], Reason = [Started], Message = [Started container filler-pod-6182a37c-a5f6-462d-a40c-b6058e9d3b6e] STEP: Considering event: Type = [Normal], Name = [filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd.161c354289a7683a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1479/filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd.161c3542dc4c6819], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd.161c35432e0a325e], Reason = [Created], Message = [Created container filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd] STEP: Considering event: Type = [Normal], Name = [filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd.161c35434aa7534f], Reason = [Started], Message = [Started container filler-pod-62ee4509-546a-4b57-9446-2694234a7cfd] STEP: Considering event: Type = [Warning], Name = [additional-pod.161c3543795bbe0e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:28:53.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1479" for this suite. 
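The resource-limits test above works by arithmetic: it sums the CPU requests already scheduled on each node, creates "filler" pods sized to consume the remaining allocatable CPU, and then shows that one more request fails with "2 Insufficient cpu". A small sketch of that bookkeeping (allocatable minus requested, per node) follows, under the same client-go assumptions as the earlier sketches.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// For each node, allocatable CPU minus the sum of scheduled pod CPU requests
// is what a "filler" pod may still request before the scheduler says no.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, n := range nodes.Items {
		free := n.Status.Allocatable.Cpu().MilliValue()
		for _, p := range pods.Items {
			if p.Spec.NodeName != n.Name {
				continue
			}
			for _, c := range p.Spec.Containers {
				free -= c.Resources.Requests.Cpu().MilliValue()
			}
		}
		fmt.Printf("node %s: %dm CPU still requestable\n", n.Name, free)
	}
}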
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.741 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":48,"skipped":872,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:28:53.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 26 21:28:53.250: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:29:00.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5037" for this suite. 
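The init-container test above relies on a kubelet rule: with restartPolicy Never, a failing init container is not retried, the app containers never start, and the pod phase becomes Failed. A minimal pod spec exercising the same rule, printed as JSON, is sketched below; the names and image are illustrative.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo never runs"},
			}},
		},
	}
	// With RestartPolicy=Never the failed init container is not retried, the
	// "app" container is never started, and the pod phase ends up Failed.
	json.NewEncoder(os.Stdout).Encode(pod)
}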
• [SLOW TEST:7.579 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":49,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:29:00.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:29:00.892: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-85f88782-f59f-4bd6-b19a-78269681b609" in namespace "security-context-test-4896" to be "success or failure" Jun 26 21:29:01.060: INFO: Pod "busybox-privileged-false-85f88782-f59f-4bd6-b19a-78269681b609": Phase="Pending", Reason="", readiness=false. Elapsed: 167.757125ms Jun 26 21:29:03.064: INFO: Pod "busybox-privileged-false-85f88782-f59f-4bd6-b19a-78269681b609": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171723737s Jun 26 21:29:05.068: INFO: Pod "busybox-privileged-false-85f88782-f59f-4bd6-b19a-78269681b609": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175950934s Jun 26 21:29:05.068: INFO: Pod "busybox-privileged-false-85f88782-f59f-4bd6-b19a-78269681b609" satisfied condition "success or failure" Jun 26 21:29:05.076: INFO: Got logs for pod "busybox-privileged-false-85f88782-f59f-4bd6-b19a-78269681b609": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:29:05.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4896" for this suite. 
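The security-context test above runs busybox with privileged: false and expects a capability-gated operation to fail, which is exactly what the logged output "ip: RTNETLINK answers: Operation not permitted" records. A hedged sketch of an equivalent pod follows, under the same client-go assumptions as before; names and image are placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "busybox",
				// Adding a link needs CAP_NET_ADMIN, which an unprivileged container lacks.
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("expect 'RTNETLINK answers: Operation not permitted' in the container log")
}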
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":892,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:29:05.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:29:20.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6793" for this suite. STEP: Destroying namespace "nsdeletetest-1418" for this suite. Jun 26 21:29:20.758: INFO: Namespace nsdeletetest-1418 was already deleted STEP: Destroying namespace "nsdeletetest-1392" for this suite. 
• [SLOW TEST:15.533 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":51,"skipped":897,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:29:20.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:29:20.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750" in namespace "downward-api-7166" to be "success or failure" Jun 26 21:29:20.848: INFO: Pod "downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750": Phase="Pending", Reason="", readiness=false. Elapsed: 9.54545ms Jun 26 21:29:22.852: INFO: Pod "downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013506774s Jun 26 21:29:24.879: INFO: Pod "downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039968423s STEP: Saw pod success Jun 26 21:29:24.879: INFO: Pod "downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750" satisfied condition "success or failure" Jun 26 21:29:24.882: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750 container client-container: STEP: delete the pod Jun 26 21:29:24.898: INFO: Waiting for pod downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750 to disappear Jun 26 21:29:24.949: INFO: Pod downwardapi-volume-617c4c18-cbe4-4b35-aeed-49aa51dc2750 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:29:24.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7166" for this suite. 
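The downward API test above projects the container's own memory limit into a file through a resourceFieldRef and then reads it back from the mounted volume. A sketch of such a pod, printed as JSON, follows; the names, image, and the 64Mi limit are illustrative (64Mi should surface in the file as 67108864 bytes).

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-memlimit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	// The container should print 67108864 (64Mi in bytes).
	json.NewEncoder(os.Stdout).Encode(pod)
}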
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":912,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:29:24.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:29:51.365: INFO: Container started at 2020-06-26 21:29:27 +0000 UTC, pod became ready at 2020-06-26 21:29:49 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:29:51.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4042" for this suite. • [SLOW TEST:26.415 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":922,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:29:51.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:29:51.470: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:29:57.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2024" for this suite. 
• [SLOW TEST:6.471 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":54,"skipped":929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:29:57.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2131 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 26 21:29:58.100: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 26 21:30:22.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.137:8080/dial?request=hostname&protocol=udp&host=10.244.1.52&port=8081&tries=1'] Namespace:pod-network-test-2131 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 21:30:22.353: INFO: >>> kubeConfig: /root/.kube/config I0626 21:30:22.384286 6 log.go:172] (0xc002ad1130) (0xc001a98b40) Create stream I0626 21:30:22.384318 6 log.go:172] (0xc002ad1130) (0xc001a98b40) Stream added, broadcasting: 1 I0626 21:30:22.387068 6 log.go:172] (0xc002ad1130) Reply frame received for 1 I0626 21:30:22.387128 6 log.go:172] (0xc002ad1130) (0xc001a98be0) Create stream I0626 21:30:22.387159 6 log.go:172] (0xc002ad1130) (0xc001a98be0) Stream added, broadcasting: 3 I0626 21:30:22.388297 6 log.go:172] (0xc002ad1130) Reply frame received for 3 I0626 21:30:22.388353 6 log.go:172] (0xc002ad1130) (0xc0028535e0) Create stream I0626 21:30:22.388370 6 log.go:172] (0xc002ad1130) (0xc0028535e0) Stream added, broadcasting: 5 I0626 21:30:22.389555 6 log.go:172] (0xc002ad1130) Reply frame received for 5 I0626 21:30:22.470388 6 log.go:172] (0xc002ad1130) Data frame received for 3 I0626 21:30:22.470432 6 log.go:172] (0xc001a98be0) (3) Data frame handling I0626 21:30:22.470468 6 log.go:172] (0xc001a98be0) (3) Data frame sent I0626 21:30:22.470930 6 log.go:172] (0xc002ad1130) Data frame received for 3 I0626 21:30:22.470973 6 log.go:172] (0xc001a98be0) (3) Data frame handling I0626 21:30:22.471178 6 log.go:172] (0xc002ad1130) Data frame received for 5 I0626 21:30:22.471251 
6 log.go:172] (0xc0028535e0) (5) Data frame handling I0626 21:30:22.473491 6 log.go:172] (0xc002ad1130) Data frame received for 1 I0626 21:30:22.473584 6 log.go:172] (0xc001a98b40) (1) Data frame handling I0626 21:30:22.473632 6 log.go:172] (0xc001a98b40) (1) Data frame sent I0626 21:30:22.473661 6 log.go:172] (0xc002ad1130) (0xc001a98b40) Stream removed, broadcasting: 1 I0626 21:30:22.473712 6 log.go:172] (0xc002ad1130) Go away received I0626 21:30:22.473800 6 log.go:172] (0xc002ad1130) (0xc001a98b40) Stream removed, broadcasting: 1 I0626 21:30:22.473826 6 log.go:172] (0xc002ad1130) (0xc001a98be0) Stream removed, broadcasting: 3 I0626 21:30:22.473859 6 log.go:172] (0xc002ad1130) (0xc0028535e0) Stream removed, broadcasting: 5 Jun 26 21:30:22.474: INFO: Waiting for responses: map[] Jun 26 21:30:22.477: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.137:8080/dial?request=hostname&protocol=udp&host=10.244.2.136&port=8081&tries=1'] Namespace:pod-network-test-2131 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 21:30:22.477: INFO: >>> kubeConfig: /root/.kube/config I0626 21:30:22.512726 6 log.go:172] (0xc002ad1810) (0xc001a98fa0) Create stream I0626 21:30:22.512749 6 log.go:172] (0xc002ad1810) (0xc001a98fa0) Stream added, broadcasting: 1 I0626 21:30:22.514835 6 log.go:172] (0xc002ad1810) Reply frame received for 1 I0626 21:30:22.514869 6 log.go:172] (0xc002ad1810) (0xc0027446e0) Create stream I0626 21:30:22.514891 6 log.go:172] (0xc002ad1810) (0xc0027446e0) Stream added, broadcasting: 3 I0626 21:30:22.516164 6 log.go:172] (0xc002ad1810) Reply frame received for 3 I0626 21:30:22.516215 6 log.go:172] (0xc002ad1810) (0xc002744780) Create stream I0626 21:30:22.516231 6 log.go:172] (0xc002ad1810) (0xc002744780) Stream added, broadcasting: 5 I0626 21:30:22.517734 6 log.go:172] (0xc002ad1810) Reply frame received for 5 I0626 21:30:22.588048 6 log.go:172] (0xc002ad1810) Data frame received for 3 I0626 21:30:22.588135 6 log.go:172] (0xc0027446e0) (3) Data frame handling I0626 21:30:22.588176 6 log.go:172] (0xc0027446e0) (3) Data frame sent I0626 21:30:22.588838 6 log.go:172] (0xc002ad1810) Data frame received for 3 I0626 21:30:22.588874 6 log.go:172] (0xc0027446e0) (3) Data frame handling I0626 21:30:22.588979 6 log.go:172] (0xc002ad1810) Data frame received for 5 I0626 21:30:22.589066 6 log.go:172] (0xc002744780) (5) Data frame handling I0626 21:30:22.590591 6 log.go:172] (0xc002ad1810) Data frame received for 1 I0626 21:30:22.590611 6 log.go:172] (0xc001a98fa0) (1) Data frame handling I0626 21:30:22.590634 6 log.go:172] (0xc001a98fa0) (1) Data frame sent I0626 21:30:22.590644 6 log.go:172] (0xc002ad1810) (0xc001a98fa0) Stream removed, broadcasting: 1 I0626 21:30:22.590654 6 log.go:172] (0xc002ad1810) Go away received I0626 21:30:22.590810 6 log.go:172] (0xc002ad1810) (0xc001a98fa0) Stream removed, broadcasting: 1 I0626 21:30:22.590848 6 log.go:172] (0xc002ad1810) (0xc0027446e0) Stream removed, broadcasting: 3 I0626 21:30:22.590864 6 log.go:172] (0xc002ad1810) (0xc002744780) Stream removed, broadcasting: 5 Jun 26 21:30:22.590: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:30:22.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2131" for this suite. 
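The two ExecWithOptions calls above are the heart of this check: a host-network helper pod curls the agnhost webserver's /dial endpoint, which relays a UDP "hostname" request to each target pod IP on port 8081 and reports back which hostnames answered; the empty "Waiting for responses: map[]" means no expected endpoint is still outstanding. A minimal sketch of the same probe, using the IPs and pod names from this run (they are specific to this cluster):

```sh
# Ask the agnhost dial server at 10.244.2.137:8080 to probe 10.244.1.52:8081
# over UDP; the response lists the hostnames that replied.
kubectl exec -n pod-network-test-2131 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.2.137:8080/dial?request=hostname&protocol=udp&host=10.244.1.52&port=8081&tries=1'"
```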
• [SLOW TEST:24.756 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":957,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:30:22.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:30:33.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7001" for this suite. • [SLOW TEST:11.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":56,"skipped":969,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:30:33.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:30:37.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6945" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1005,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:30:37.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jun 26 21:30:37.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6989 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 26 21:30:41.101: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0626 21:30:41.022347 518 log.go:172] (0xc000b500b0) (0xc0005efb80) Create stream\nI0626 21:30:41.022410 518 log.go:172] (0xc000b500b0) (0xc0005efb80) Stream added, broadcasting: 1\nI0626 21:30:41.024856 518 log.go:172] (0xc000b500b0) Reply frame received for 1\nI0626 21:30:41.024904 518 log.go:172] (0xc000b500b0) (0xc000646000) Create stream\nI0626 21:30:41.024925 518 log.go:172] (0xc000b500b0) (0xc000646000) Stream added, broadcasting: 3\nI0626 21:30:41.026034 518 log.go:172] (0xc000b500b0) Reply frame received for 3\nI0626 21:30:41.026073 518 log.go:172] (0xc000b500b0) (0xc0005efc20) Create stream\nI0626 21:30:41.026084 518 log.go:172] (0xc000b500b0) (0xc0005efc20) Stream added, broadcasting: 5\nI0626 21:30:41.027292 518 log.go:172] (0xc000b500b0) Reply frame received for 5\nI0626 21:30:41.027340 518 log.go:172] (0xc000b500b0) (0xc0005efcc0) Create stream\nI0626 21:30:41.027367 518 log.go:172] (0xc000b500b0) (0xc0005efcc0) Stream added, broadcasting: 7\nI0626 21:30:41.028391 518 log.go:172] (0xc000b500b0) Reply frame received for 7\nI0626 21:30:41.028574 518 log.go:172] (0xc000646000) (3) Writing data frame\nI0626 21:30:41.028670 518 log.go:172] (0xc000646000) (3) Writing data frame\nI0626 21:30:41.029857 518 log.go:172] (0xc000b500b0) Data frame received for 5\nI0626 21:30:41.029878 518 log.go:172] (0xc0005efc20) (5) Data frame handling\nI0626 21:30:41.029894 518 log.go:172] (0xc0005efc20) (5) Data frame sent\nI0626 21:30:41.030354 518 log.go:172] (0xc000b500b0) Data frame received for 5\nI0626 21:30:41.030369 518 log.go:172] (0xc0005efc20) (5) Data frame handling\nI0626 21:30:41.030383 518 log.go:172] (0xc0005efc20) (5) Data frame sent\nI0626 21:30:41.071656 518 log.go:172] (0xc000b500b0) Data frame received for 5\nI0626 21:30:41.071743 518 log.go:172] (0xc0005efc20) (5) Data frame handling\nI0626 21:30:41.072215 518 log.go:172] (0xc000b500b0) Data frame received for 1\nI0626 21:30:41.072261 518 log.go:172] (0xc0005efb80) (1) Data frame handling\nI0626 21:30:41.072288 518 log.go:172] (0xc000b500b0) Data frame received for 7\nI0626 21:30:41.072339 518 log.go:172] (0xc0005efcc0) (7) Data frame handling\nI0626 21:30:41.072414 518 log.go:172] (0xc0005efb80) (1) Data frame sent\nI0626 21:30:41.072465 518 log.go:172] (0xc000b500b0) (0xc0005efb80) Stream removed, broadcasting: 1\nI0626 21:30:41.072705 518 log.go:172] (0xc000b500b0) (0xc000646000) Stream removed, broadcasting: 3\nI0626 21:30:41.072932 518 log.go:172] (0xc000b500b0) (0xc0005efb80) Stream removed, broadcasting: 1\nI0626 21:30:41.072960 518 log.go:172] (0xc000b500b0) (0xc000646000) Stream removed, broadcasting: 3\nI0626 21:30:41.072972 518 log.go:172] (0xc000b500b0) (0xc0005efc20) Stream removed, broadcasting: 5\nI0626 21:30:41.073090 518 log.go:172] (0xc000b500b0) Go away received\nI0626 21:30:41.073624 518 log.go:172] (0xc000b500b0) (0xc0005efcc0) Stream removed, broadcasting: 7\n" Jun 26 21:30:41.101: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:30:43.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6989" for this suite. 
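The full command line is captured in the log above; reformatted for readability, with the payload supplied on stdin (the "abcd1234" that `cat` echoes back before "stdin closed"):

```sh
# Same invocation as the test, piping the stdin payload in. Note the log's own
# warning: --generator=job/v1 is deprecated; newer kubectl uses
# `kubectl create job` instead.
echo -n 'abcd1234' | kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6989 \
  run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo stdin closed'
```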
• [SLOW TEST:5.268 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":58,"skipped":1014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:30:43.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 26 21:30:47.577: INFO: &Pod{ObjectMeta:{send-events-a8264b7b-badc-432a-b6ea-b5122ae5ed73 events-2265 /api/v1/namespaces/events-2265/pods/send-events-a8264b7b-badc-432a-b6ea-b5122ae5ed73 ca05f616-45ac-4445-9ea3-dc49bb8dfceb 27533585 0 2020-06-26 21:30:43 +0000 UTC map[name:foo time:374349638] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gs96k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gs96k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gs96k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:30:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:30:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:30:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:30:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.55,StartTime:2020-06-26 21:30:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:30:45 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9a7c4d46bede4d480b480b3e673259bd54b93ce014c3bea2c6f1a8cbde62c592,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 26 21:30:49.579: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 26 21:30:51.584: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:30:51.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2265" for this suite. • [SLOW TEST:8.532 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":59,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:30:51.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:30:51.758: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6481 I0626 21:30:51.800958 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6481, replica count: 1 I0626 21:30:52.851386 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:30:53.851616 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:30:54.851860 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:30:55.852097 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 21:30:55.978: INFO: Created: latency-svc-2r4w9 Jun 26 21:30:55.994: INFO: Got endpoints: latency-svc-2r4w9 [42.596516ms] Jun 26 21:30:56.070: INFO: Created: 
latency-svc-8zxdr Jun 26 21:30:56.084: INFO: Got endpoints: latency-svc-8zxdr [89.250296ms] Jun 26 21:30:56.107: INFO: Created: latency-svc-dv9wr Jun 26 21:30:56.120: INFO: Got endpoints: latency-svc-dv9wr [125.394635ms] Jun 26 21:30:56.146: INFO: Created: latency-svc-v24r9 Jun 26 21:30:56.163: INFO: Got endpoints: latency-svc-v24r9 [167.85266ms] Jun 26 21:30:56.208: INFO: Created: latency-svc-sc69b Jun 26 21:30:56.248: INFO: Got endpoints: latency-svc-sc69b [253.330861ms] Jun 26 21:30:56.291: INFO: Created: latency-svc-c7zdq Jun 26 21:30:56.307: INFO: Got endpoints: latency-svc-c7zdq [312.396285ms] Jun 26 21:30:56.367: INFO: Created: latency-svc-5skmw Jun 26 21:30:56.380: INFO: Got endpoints: latency-svc-5skmw [384.818319ms] Jun 26 21:30:56.403: INFO: Created: latency-svc-mr6kq Jun 26 21:30:56.427: INFO: Got endpoints: latency-svc-mr6kq [432.236934ms] Jun 26 21:30:56.525: INFO: Created: latency-svc-dzzmz Jun 26 21:30:56.529: INFO: Got endpoints: latency-svc-dzzmz [534.258308ms] Jun 26 21:30:56.559: INFO: Created: latency-svc-cqfqw Jun 26 21:30:56.572: INFO: Got endpoints: latency-svc-cqfqw [576.82979ms] Jun 26 21:30:56.616: INFO: Created: latency-svc-g47bd Jun 26 21:30:56.669: INFO: Got endpoints: latency-svc-g47bd [673.91745ms] Jun 26 21:30:56.671: INFO: Created: latency-svc-djg75 Jun 26 21:30:56.680: INFO: Got endpoints: latency-svc-djg75 [685.463226ms] Jun 26 21:30:56.704: INFO: Created: latency-svc-fnrgx Jun 26 21:30:56.716: INFO: Got endpoints: latency-svc-fnrgx [721.210824ms] Jun 26 21:30:56.740: INFO: Created: latency-svc-nsphz Jun 26 21:30:56.752: INFO: Got endpoints: latency-svc-nsphz [757.498394ms] Jun 26 21:30:56.818: INFO: Created: latency-svc-2cjdh Jun 26 21:30:56.847: INFO: Got endpoints: latency-svc-2cjdh [851.802182ms] Jun 26 21:30:56.847: INFO: Created: latency-svc-gvwsc Jun 26 21:30:56.872: INFO: Got endpoints: latency-svc-gvwsc [877.389561ms] Jun 26 21:30:56.902: INFO: Created: latency-svc-r98gj Jun 26 21:30:56.918: INFO: Got endpoints: latency-svc-r98gj [833.62646ms] Jun 26 21:30:56.963: INFO: Created: latency-svc-9mfhj Jun 26 21:30:56.972: INFO: Got endpoints: latency-svc-9mfhj [851.93128ms] Jun 26 21:30:57.010: INFO: Created: latency-svc-2869h Jun 26 21:30:57.026: INFO: Got endpoints: latency-svc-2869h [863.235821ms] Jun 26 21:30:57.112: INFO: Created: latency-svc-z2265 Jun 26 21:30:57.128: INFO: Got endpoints: latency-svc-z2265 [879.423172ms] Jun 26 21:30:57.160: INFO: Created: latency-svc-j4zd7 Jun 26 21:30:57.176: INFO: Got endpoints: latency-svc-j4zd7 [868.865387ms] Jun 26 21:30:57.238: INFO: Created: latency-svc-6j986 Jun 26 21:30:57.248: INFO: Got endpoints: latency-svc-6j986 [868.486531ms] Jun 26 21:30:57.273: INFO: Created: latency-svc-vtvsk Jun 26 21:30:57.284: INFO: Got endpoints: latency-svc-vtvsk [857.629129ms] Jun 26 21:30:57.322: INFO: Created: latency-svc-5wf5w Jun 26 21:30:57.366: INFO: Got endpoints: latency-svc-5wf5w [837.341394ms] Jun 26 21:30:57.400: INFO: Created: latency-svc-s8kvv Jun 26 21:30:57.428: INFO: Got endpoints: latency-svc-s8kvv [856.762553ms] Jun 26 21:30:57.513: INFO: Created: latency-svc-jc66x Jun 26 21:30:57.538: INFO: Got endpoints: latency-svc-jc66x [869.135156ms] Jun 26 21:30:57.539: INFO: Created: latency-svc-2lmk9 Jun 26 21:30:57.555: INFO: Got endpoints: latency-svc-2lmk9 [875.223032ms] Jun 26 21:30:57.580: INFO: Created: latency-svc-spgxh Jun 26 21:30:57.592: INFO: Got endpoints: latency-svc-spgxh [875.546357ms] Jun 26 21:30:57.610: INFO: Created: latency-svc-l9jt8 Jun 26 21:30:57.663: INFO: Got endpoints: latency-svc-l9jt8 
[911.055166ms] Jun 26 21:30:57.664: INFO: Created: latency-svc-hl4xc Jun 26 21:30:57.677: INFO: Got endpoints: latency-svc-hl4xc [829.927036ms] Jun 26 21:30:57.717: INFO: Created: latency-svc-qxfqz Jun 26 21:30:57.731: INFO: Got endpoints: latency-svc-qxfqz [858.353335ms] Jun 26 21:30:57.754: INFO: Created: latency-svc-f7wmm Jun 26 21:30:57.819: INFO: Got endpoints: latency-svc-f7wmm [901.242216ms] Jun 26 21:30:57.821: INFO: Created: latency-svc-n8vw9 Jun 26 21:30:57.827: INFO: Got endpoints: latency-svc-n8vw9 [855.08938ms] Jun 26 21:30:57.849: INFO: Created: latency-svc-2f46h Jun 26 21:30:57.864: INFO: Got endpoints: latency-svc-2f46h [838.109921ms] Jun 26 21:30:57.909: INFO: Created: latency-svc-tqx7n Jun 26 21:30:57.962: INFO: Got endpoints: latency-svc-tqx7n [834.724592ms] Jun 26 21:30:57.976: INFO: Created: latency-svc-zcb6r Jun 26 21:30:57.993: INFO: Got endpoints: latency-svc-zcb6r [817.363933ms] Jun 26 21:30:58.012: INFO: Created: latency-svc-22bbt Jun 26 21:30:58.032: INFO: Got endpoints: latency-svc-22bbt [784.089932ms] Jun 26 21:30:58.107: INFO: Created: latency-svc-bff6d Jun 26 21:30:58.111: INFO: Got endpoints: latency-svc-bff6d [826.111068ms] Jun 26 21:30:58.149: INFO: Created: latency-svc-twtgv Jun 26 21:30:58.165: INFO: Got endpoints: latency-svc-twtgv [798.958237ms] Jun 26 21:30:58.192: INFO: Created: latency-svc-dgnf5 Jun 26 21:30:58.274: INFO: Got endpoints: latency-svc-dgnf5 [845.922294ms] Jun 26 21:30:58.318: INFO: Created: latency-svc-c6qj2 Jun 26 21:30:58.334: INFO: Got endpoints: latency-svc-c6qj2 [795.767319ms] Jun 26 21:30:58.360: INFO: Created: latency-svc-dlvgx Jun 26 21:30:58.418: INFO: Got endpoints: latency-svc-dlvgx [862.324029ms] Jun 26 21:30:58.449: INFO: Created: latency-svc-bvvvz Jun 26 21:30:58.484: INFO: Got endpoints: latency-svc-bvvvz [892.687313ms] Jun 26 21:30:58.580: INFO: Created: latency-svc-z4h4s Jun 26 21:30:58.582: INFO: Got endpoints: latency-svc-z4h4s [918.972979ms] Jun 26 21:30:58.624: INFO: Created: latency-svc-8m9vz Jun 26 21:30:58.640: INFO: Got endpoints: latency-svc-8m9vz [963.796824ms] Jun 26 21:30:58.659: INFO: Created: latency-svc-mw9z8 Jun 26 21:30:58.670: INFO: Got endpoints: latency-svc-mw9z8 [939.734943ms] Jun 26 21:30:58.729: INFO: Created: latency-svc-j8hmd Jun 26 21:30:58.737: INFO: Got endpoints: latency-svc-j8hmd [918.350786ms] Jun 26 21:30:58.768: INFO: Created: latency-svc-6tqkw Jun 26 21:30:58.788: INFO: Got endpoints: latency-svc-6tqkw [960.948248ms] Jun 26 21:30:58.816: INFO: Created: latency-svc-v8vkr Jun 26 21:30:58.878: INFO: Got endpoints: latency-svc-v8vkr [1.014269884s] Jun 26 21:30:58.881: INFO: Created: latency-svc-48nj4 Jun 26 21:30:58.888: INFO: Got endpoints: latency-svc-48nj4 [925.203345ms] Jun 26 21:30:58.917: INFO: Created: latency-svc-xgp6h Jun 26 21:30:58.931: INFO: Got endpoints: latency-svc-xgp6h [937.785486ms] Jun 26 21:30:58.953: INFO: Created: latency-svc-mm7bc Jun 26 21:30:58.966: INFO: Got endpoints: latency-svc-mm7bc [934.019383ms] Jun 26 21:30:59.082: INFO: Created: latency-svc-snlw6 Jun 26 21:30:59.105: INFO: Got endpoints: latency-svc-snlw6 [994.154655ms] Jun 26 21:30:59.181: INFO: Created: latency-svc-24ctg Jun 26 21:30:59.286: INFO: Got endpoints: latency-svc-24ctg [1.12015682s] Jun 26 21:30:59.287: INFO: Created: latency-svc-72fqb Jun 26 21:30:59.315: INFO: Got endpoints: latency-svc-72fqb [1.040226123s] Jun 26 21:30:59.355: INFO: Created: latency-svc-gl8b5 Jun 26 21:30:59.369: INFO: Got endpoints: latency-svc-gl8b5 [1.035533491s] Jun 26 21:30:59.473: INFO: Created: latency-svc-65sw4 Jun 
26 21:30:59.478: INFO: Got endpoints: latency-svc-65sw4 [1.060380836s] Jun 26 21:30:59.530: INFO: Created: latency-svc-9cqc6 Jun 26 21:30:59.615: INFO: Got endpoints: latency-svc-9cqc6 [1.130611839s] Jun 26 21:30:59.681: INFO: Created: latency-svc-ncpb9 Jun 26 21:30:59.691: INFO: Got endpoints: latency-svc-ncpb9 [1.108289508s] Jun 26 21:30:59.835: INFO: Created: latency-svc-gcq7b Jun 26 21:30:59.980: INFO: Got endpoints: latency-svc-gcq7b [1.340001484s] Jun 26 21:30:59.998: INFO: Created: latency-svc-pftlg Jun 26 21:31:00.027: INFO: Got endpoints: latency-svc-pftlg [1.356831354s] Jun 26 21:31:00.065: INFO: Created: latency-svc-krqh5 Jun 26 21:31:00.136: INFO: Got endpoints: latency-svc-krqh5 [1.398710825s] Jun 26 21:31:00.148: INFO: Created: latency-svc-ftld2 Jun 26 21:31:00.165: INFO: Got endpoints: latency-svc-ftld2 [1.377368049s] Jun 26 21:31:00.190: INFO: Created: latency-svc-f8rld Jun 26 21:31:00.201: INFO: Got endpoints: latency-svc-f8rld [1.322742908s] Jun 26 21:31:00.280: INFO: Created: latency-svc-dkgdg Jun 26 21:31:00.283: INFO: Got endpoints: latency-svc-dkgdg [1.395489621s] Jun 26 21:31:00.309: INFO: Created: latency-svc-882zk Jun 26 21:31:00.322: INFO: Got endpoints: latency-svc-882zk [1.390608744s] Jun 26 21:31:00.352: INFO: Created: latency-svc-29fmj Jun 26 21:31:00.417: INFO: Got endpoints: latency-svc-29fmj [1.450648402s] Jun 26 21:31:00.423: INFO: Created: latency-svc-mb9xt Jun 26 21:31:00.443: INFO: Got endpoints: latency-svc-mb9xt [1.337910418s] Jun 26 21:31:00.484: INFO: Created: latency-svc-vmw4x Jun 26 21:31:00.496: INFO: Got endpoints: latency-svc-vmw4x [1.210569841s] Jun 26 21:31:00.561: INFO: Created: latency-svc-tbrj2 Jun 26 21:31:00.591: INFO: Got endpoints: latency-svc-tbrj2 [1.276319805s] Jun 26 21:31:00.592: INFO: Created: latency-svc-rzrh5 Jun 26 21:31:00.605: INFO: Got endpoints: latency-svc-rzrh5 [1.235643698s] Jun 26 21:31:00.629: INFO: Created: latency-svc-vrg8f Jun 26 21:31:00.642: INFO: Got endpoints: latency-svc-vrg8f [1.163555388s] Jun 26 21:31:00.706: INFO: Created: latency-svc-sxflp Jun 26 21:31:00.708: INFO: Got endpoints: latency-svc-sxflp [1.092964475s] Jun 26 21:31:00.735: INFO: Created: latency-svc-hh4tl Jun 26 21:31:00.750: INFO: Got endpoints: latency-svc-hh4tl [1.058989713s] Jun 26 21:31:00.771: INFO: Created: latency-svc-frpcj Jun 26 21:31:00.787: INFO: Got endpoints: latency-svc-frpcj [806.178173ms] Jun 26 21:31:00.837: INFO: Created: latency-svc-mmg89 Jun 26 21:31:00.852: INFO: Got endpoints: latency-svc-mmg89 [825.129749ms] Jun 26 21:31:00.881: INFO: Created: latency-svc-zt5mw Jun 26 21:31:00.977: INFO: Got endpoints: latency-svc-zt5mw [840.802261ms] Jun 26 21:31:01.035: INFO: Created: latency-svc-bj8x6 Jun 26 21:31:01.061: INFO: Got endpoints: latency-svc-bj8x6 [895.531157ms] Jun 26 21:31:01.122: INFO: Created: latency-svc-2tzdq Jun 26 21:31:01.167: INFO: Got endpoints: latency-svc-2tzdq [966.023078ms] Jun 26 21:31:01.218: INFO: Created: latency-svc-4qldd Jun 26 21:31:01.267: INFO: Got endpoints: latency-svc-4qldd [984.010791ms] Jun 26 21:31:01.283: INFO: Created: latency-svc-hp6cc Jun 26 21:31:01.297: INFO: Got endpoints: latency-svc-hp6cc [975.495512ms] Jun 26 21:31:01.318: INFO: Created: latency-svc-9tj2c Jun 26 21:31:01.334: INFO: Got endpoints: latency-svc-9tj2c [916.707954ms] Jun 26 21:31:01.418: INFO: Created: latency-svc-86494 Jun 26 21:31:01.421: INFO: Got endpoints: latency-svc-86494 [978.040208ms] Jun 26 21:31:01.468: INFO: Created: latency-svc-qqbd9 Jun 26 21:31:01.485: INFO: Got endpoints: latency-svc-qqbd9 [988.773507ms] 
Jun 26 21:31:01.510: INFO: Created: latency-svc-5zs69 Jun 26 21:31:01.546: INFO: Got endpoints: latency-svc-5zs69 [954.623936ms] Jun 26 21:31:01.559: INFO: Created: latency-svc-6xrrd Jun 26 21:31:01.574: INFO: Got endpoints: latency-svc-6xrrd [969.2163ms] Jun 26 21:31:01.595: INFO: Created: latency-svc-xrmmv Jun 26 21:31:01.604: INFO: Got endpoints: latency-svc-xrmmv [962.669184ms] Jun 26 21:31:01.631: INFO: Created: latency-svc-qzx8k Jun 26 21:31:01.641: INFO: Got endpoints: latency-svc-qzx8k [933.150898ms] Jun 26 21:31:01.687: INFO: Created: latency-svc-4kzt8 Jun 26 21:31:01.689: INFO: Got endpoints: latency-svc-4kzt8 [939.656489ms] Jun 26 21:31:01.720: INFO: Created: latency-svc-xlvz5 Jun 26 21:31:01.743: INFO: Got endpoints: latency-svc-xlvz5 [956.612583ms] Jun 26 21:31:01.781: INFO: Created: latency-svc-8xwv8 Jun 26 21:31:01.836: INFO: Got endpoints: latency-svc-8xwv8 [983.974808ms] Jun 26 21:31:01.839: INFO: Created: latency-svc-pglmj Jun 26 21:31:01.852: INFO: Got endpoints: latency-svc-pglmj [875.029074ms] Jun 26 21:31:01.882: INFO: Created: latency-svc-pt4qn Jun 26 21:31:01.894: INFO: Got endpoints: latency-svc-pt4qn [833.317312ms] Jun 26 21:31:01.918: INFO: Created: latency-svc-wlrx2 Jun 26 21:31:01.932: INFO: Got endpoints: latency-svc-wlrx2 [764.389372ms] Jun 26 21:31:01.991: INFO: Created: latency-svc-wp2bx Jun 26 21:31:02.040: INFO: Got endpoints: latency-svc-wp2bx [772.336297ms] Jun 26 21:31:02.106: INFO: Created: latency-svc-xfrvt Jun 26 21:31:02.140: INFO: Got endpoints: latency-svc-xfrvt [842.730169ms] Jun 26 21:31:02.226: INFO: Created: latency-svc-5kjgw Jun 26 21:31:02.237: INFO: Got endpoints: latency-svc-5kjgw [903.576428ms] Jun 26 21:31:02.260: INFO: Created: latency-svc-tx275 Jun 26 21:31:02.298: INFO: Got endpoints: latency-svc-tx275 [876.728568ms] Jun 26 21:31:02.363: INFO: Created: latency-svc-fjmgw Jun 26 21:31:02.366: INFO: Got endpoints: latency-svc-fjmgw [880.876043ms] Jun 26 21:31:02.394: INFO: Created: latency-svc-jrmxd Jun 26 21:31:02.407: INFO: Got endpoints: latency-svc-jrmxd [860.889986ms] Jun 26 21:31:02.452: INFO: Created: latency-svc-mxrw2 Jun 26 21:31:02.525: INFO: Got endpoints: latency-svc-mxrw2 [950.654175ms] Jun 26 21:31:02.528: INFO: Created: latency-svc-b8jpt Jun 26 21:31:02.532: INFO: Got endpoints: latency-svc-b8jpt [927.421222ms] Jun 26 21:31:02.567: INFO: Created: latency-svc-cnn42 Jun 26 21:31:02.581: INFO: Got endpoints: latency-svc-cnn42 [939.971741ms] Jun 26 21:31:02.603: INFO: Created: latency-svc-k5gxc Jun 26 21:31:02.617: INFO: Got endpoints: latency-svc-k5gxc [927.783853ms] Jun 26 21:31:02.681: INFO: Created: latency-svc-pv89n Jun 26 21:31:02.707: INFO: Got endpoints: latency-svc-pv89n [963.832988ms] Jun 26 21:31:02.741: INFO: Created: latency-svc-hrwn6 Jun 26 21:31:02.755: INFO: Got endpoints: latency-svc-hrwn6 [918.511426ms] Jun 26 21:31:02.778: INFO: Created: latency-svc-jrq7d Jun 26 21:31:02.878: INFO: Got endpoints: latency-svc-jrq7d [1.026380008s] Jun 26 21:31:02.880: INFO: Created: latency-svc-2nbr5 Jun 26 21:31:02.894: INFO: Got endpoints: latency-svc-2nbr5 [999.527297ms] Jun 26 21:31:02.928: INFO: Created: latency-svc-vw9x6 Jun 26 21:31:02.942: INFO: Got endpoints: latency-svc-vw9x6 [1.00998218s] Jun 26 21:31:02.963: INFO: Created: latency-svc-xdf48 Jun 26 21:31:03.083: INFO: Got endpoints: latency-svc-xdf48 [1.043175779s] Jun 26 21:31:03.085: INFO: Created: latency-svc-2bgqv Jun 26 21:31:03.119: INFO: Got endpoints: latency-svc-2bgqv [978.986152ms] Jun 26 21:31:03.150: INFO: Created: latency-svc-449dp Jun 26 21:31:03.172: 
INFO: Got endpoints: latency-svc-449dp [935.00863ms] Jun 26 21:31:03.232: INFO: Created: latency-svc-frwmg Jun 26 21:31:03.235: INFO: Got endpoints: latency-svc-frwmg [937.173895ms] Jun 26 21:31:03.256: INFO: Created: latency-svc-ljgzt Jun 26 21:31:03.274: INFO: Got endpoints: latency-svc-ljgzt [907.707132ms] Jun 26 21:31:03.293: INFO: Created: latency-svc-jd7bh Jun 26 21:31:03.310: INFO: Got endpoints: latency-svc-jd7bh [903.142422ms] Jun 26 21:31:03.329: INFO: Created: latency-svc-hb7xq Jun 26 21:31:03.375: INFO: Got endpoints: latency-svc-hb7xq [850.291254ms] Jun 26 21:31:03.388: INFO: Created: latency-svc-w66wp Jun 26 21:31:03.402: INFO: Got endpoints: latency-svc-w66wp [870.33826ms] Jun 26 21:31:03.424: INFO: Created: latency-svc-mmxcs Jun 26 21:31:03.436: INFO: Got endpoints: latency-svc-mmxcs [854.806733ms] Jun 26 21:31:03.455: INFO: Created: latency-svc-x8jll Jun 26 21:31:03.467: INFO: Got endpoints: latency-svc-x8jll [849.556162ms] Jun 26 21:31:03.507: INFO: Created: latency-svc-dfpqk Jun 26 21:31:03.533: INFO: Got endpoints: latency-svc-dfpqk [825.880665ms] Jun 26 21:31:03.534: INFO: Created: latency-svc-vf8tk Jun 26 21:31:03.545: INFO: Got endpoints: latency-svc-vf8tk [789.985133ms] Jun 26 21:31:03.563: INFO: Created: latency-svc-779x8 Jun 26 21:31:03.575: INFO: Got endpoints: latency-svc-779x8 [696.785452ms] Jun 26 21:31:03.598: INFO: Created: latency-svc-rqjzk Jun 26 21:31:03.657: INFO: Got endpoints: latency-svc-rqjzk [762.86534ms] Jun 26 21:31:03.659: INFO: Created: latency-svc-75w2f Jun 26 21:31:03.677: INFO: Got endpoints: latency-svc-75w2f [735.381459ms] Jun 26 21:31:03.708: INFO: Created: latency-svc-6dwb8 Jun 26 21:31:03.720: INFO: Got endpoints: latency-svc-6dwb8 [637.314975ms] Jun 26 21:31:03.738: INFO: Created: latency-svc-qpvtz Jun 26 21:31:03.750: INFO: Got endpoints: latency-svc-qpvtz [631.16983ms] Jun 26 21:31:03.807: INFO: Created: latency-svc-2m5rh Jun 26 21:31:03.810: INFO: Got endpoints: latency-svc-2m5rh [637.685461ms] Jun 26 21:31:03.839: INFO: Created: latency-svc-6dvtl Jun 26 21:31:03.853: INFO: Got endpoints: latency-svc-6dvtl [618.000242ms] Jun 26 21:31:03.894: INFO: Created: latency-svc-q2xbq Jun 26 21:31:03.968: INFO: Got endpoints: latency-svc-q2xbq [157.684378ms] Jun 26 21:31:03.969: INFO: Created: latency-svc-5xnht Jun 26 21:31:03.988: INFO: Got endpoints: latency-svc-5xnht [714.013118ms] Jun 26 21:31:04.030: INFO: Created: latency-svc-g5q7f Jun 26 21:31:04.046: INFO: Got endpoints: latency-svc-g5q7f [736.085417ms] Jun 26 21:31:04.068: INFO: Created: latency-svc-q9jlg Jun 26 21:31:04.142: INFO: Got endpoints: latency-svc-q9jlg [766.322899ms] Jun 26 21:31:04.145: INFO: Created: latency-svc-db5rg Jun 26 21:31:04.154: INFO: Got endpoints: latency-svc-db5rg [751.562195ms] Jun 26 21:31:04.187: INFO: Created: latency-svc-rx5zt Jun 26 21:31:04.215: INFO: Got endpoints: latency-svc-rx5zt [778.513204ms] Jun 26 21:31:04.280: INFO: Created: latency-svc-znjcc Jun 26 21:31:04.308: INFO: Created: latency-svc-2q7sx Jun 26 21:31:04.308: INFO: Got endpoints: latency-svc-znjcc [841.307541ms] Jun 26 21:31:04.317: INFO: Got endpoints: latency-svc-2q7sx [783.770514ms] Jun 26 21:31:04.344: INFO: Created: latency-svc-mgp42 Jun 26 21:31:04.367: INFO: Got endpoints: latency-svc-mgp42 [821.775853ms] Jun 26 21:31:04.429: INFO: Created: latency-svc-k5q45 Jun 26 21:31:04.432: INFO: Got endpoints: latency-svc-k5q45 [856.4469ms] Jun 26 21:31:04.458: INFO: Created: latency-svc-f4ls8 Jun 26 21:31:04.468: INFO: Got endpoints: latency-svc-f4ls8 [810.718289ms] Jun 26 21:31:04.506: 
INFO: Created: latency-svc-8xnb8 Jun 26 21:31:04.585: INFO: Got endpoints: latency-svc-8xnb8 [907.954417ms] Jun 26 21:31:04.587: INFO: Created: latency-svc-dv69n Jun 26 21:31:04.607: INFO: Got endpoints: latency-svc-dv69n [886.541319ms] Jun 26 21:31:04.624: INFO: Created: latency-svc-dhhsr Jun 26 21:31:04.637: INFO: Got endpoints: latency-svc-dhhsr [886.581113ms] Jun 26 21:31:04.656: INFO: Created: latency-svc-t62s5 Jun 26 21:31:04.667: INFO: Got endpoints: latency-svc-t62s5 [813.666475ms] Jun 26 21:31:04.736: INFO: Created: latency-svc-hqx4d Jun 26 21:31:04.738: INFO: Got endpoints: latency-svc-hqx4d [770.310041ms] Jun 26 21:31:04.762: INFO: Created: latency-svc-l25hg Jun 26 21:31:04.775: INFO: Got endpoints: latency-svc-l25hg [787.409891ms] Jun 26 21:31:04.792: INFO: Created: latency-svc-227jn Jun 26 21:31:04.806: INFO: Got endpoints: latency-svc-227jn [759.801563ms] Jun 26 21:31:04.823: INFO: Created: latency-svc-lqsm6 Jun 26 21:31:04.884: INFO: Got endpoints: latency-svc-lqsm6 [742.455115ms] Jun 26 21:31:04.892: INFO: Created: latency-svc-9h9vc Jun 26 21:31:04.896: INFO: Got endpoints: latency-svc-9h9vc [741.970504ms] Jun 26 21:31:04.912: INFO: Created: latency-svc-v4vpt Jun 26 21:31:04.927: INFO: Got endpoints: latency-svc-v4vpt [711.799891ms] Jun 26 21:31:04.948: INFO: Created: latency-svc-gkmt2 Jun 26 21:31:04.963: INFO: Got endpoints: latency-svc-gkmt2 [654.670621ms] Jun 26 21:31:04.984: INFO: Created: latency-svc-v598p Jun 26 21:31:05.040: INFO: Got endpoints: latency-svc-v598p [723.076285ms] Jun 26 21:31:05.042: INFO: Created: latency-svc-tvwhq Jun 26 21:31:05.077: INFO: Got endpoints: latency-svc-tvwhq [709.896144ms] Jun 26 21:31:05.123: INFO: Created: latency-svc-g7wld Jun 26 21:31:05.139: INFO: Got endpoints: latency-svc-g7wld [706.936712ms] Jun 26 21:31:05.178: INFO: Created: latency-svc-jtnbs Jun 26 21:31:05.181: INFO: Got endpoints: latency-svc-jtnbs [712.813679ms] Jun 26 21:31:05.214: INFO: Created: latency-svc-n2ljp Jun 26 21:31:05.235: INFO: Got endpoints: latency-svc-n2ljp [649.22275ms] Jun 26 21:31:05.256: INFO: Created: latency-svc-lqwzb Jun 26 21:31:05.264: INFO: Got endpoints: latency-svc-lqwzb [657.519342ms] Jun 26 21:31:05.310: INFO: Created: latency-svc-gppt8 Jun 26 21:31:05.312: INFO: Got endpoints: latency-svc-gppt8 [675.214524ms] Jun 26 21:31:05.375: INFO: Created: latency-svc-d9wmb Jun 26 21:31:05.400: INFO: Got endpoints: latency-svc-d9wmb [733.101984ms] Jun 26 21:31:05.454: INFO: Created: latency-svc-knvfq Jun 26 21:31:05.457: INFO: Got endpoints: latency-svc-knvfq [719.010547ms] Jun 26 21:31:05.482: INFO: Created: latency-svc-lf2p5 Jun 26 21:31:05.499: INFO: Got endpoints: latency-svc-lf2p5 [723.857944ms] Jun 26 21:31:05.518: INFO: Created: latency-svc-xz2tr Jun 26 21:31:05.536: INFO: Got endpoints: latency-svc-xz2tr [729.713009ms] Jun 26 21:31:05.594: INFO: Created: latency-svc-pzlxw Jun 26 21:31:05.594: INFO: Got endpoints: latency-svc-pzlxw [709.927025ms] Jun 26 21:31:05.615: INFO: Created: latency-svc-njcw9 Jun 26 21:31:05.633: INFO: Got endpoints: latency-svc-njcw9 [736.817636ms] Jun 26 21:31:05.664: INFO: Created: latency-svc-dlp76 Jun 26 21:31:05.668: INFO: Got endpoints: latency-svc-dlp76 [741.825812ms] Jun 26 21:31:05.729: INFO: Created: latency-svc-grrr4 Jun 26 21:31:05.732: INFO: Got endpoints: latency-svc-grrr4 [769.060307ms] Jun 26 21:31:05.783: INFO: Created: latency-svc-wl8cq Jun 26 21:31:05.808: INFO: Got endpoints: latency-svc-wl8cq [767.829507ms] Jun 26 21:31:05.860: INFO: Created: latency-svc-qvw84 Jun 26 21:31:05.873: INFO: Got 
endpoints: latency-svc-qvw84 [796.312608ms] Jun 26 21:31:05.890: INFO: Created: latency-svc-kgscc Jun 26 21:31:05.903: INFO: Got endpoints: latency-svc-kgscc [764.743745ms] Jun 26 21:31:05.920: INFO: Created: latency-svc-srggj Jun 26 21:31:05.934: INFO: Got endpoints: latency-svc-srggj [752.863282ms] Jun 26 21:31:06.011: INFO: Created: latency-svc-f7wdp Jun 26 21:31:06.014: INFO: Got endpoints: latency-svc-f7wdp [779.647481ms] Jun 26 21:31:06.089: INFO: Created: latency-svc-5pq9t Jun 26 21:31:06.130: INFO: Got endpoints: latency-svc-5pq9t [865.303377ms] Jun 26 21:31:06.168: INFO: Created: latency-svc-ls4pn Jun 26 21:31:06.192: INFO: Got endpoints: latency-svc-ls4pn [879.244159ms] Jun 26 21:31:06.216: INFO: Created: latency-svc-tsprr Jun 26 21:31:06.262: INFO: Got endpoints: latency-svc-tsprr [861.790207ms] Jun 26 21:31:06.274: INFO: Created: latency-svc-8qzbw Jun 26 21:31:06.289: INFO: Got endpoints: latency-svc-8qzbw [831.211446ms] Jun 26 21:31:06.329: INFO: Created: latency-svc-lph7g Jun 26 21:31:06.361: INFO: Got endpoints: latency-svc-lph7g [861.897872ms] Jun 26 21:31:06.408: INFO: Created: latency-svc-7d5k5 Jun 26 21:31:06.412: INFO: Got endpoints: latency-svc-7d5k5 [876.482021ms] Jun 26 21:31:06.490: INFO: Created: latency-svc-c89c8 Jun 26 21:31:06.531: INFO: Got endpoints: latency-svc-c89c8 [936.953076ms] Jun 26 21:31:06.564: INFO: Created: latency-svc-f7fzw Jun 26 21:31:06.590: INFO: Got endpoints: latency-svc-f7fzw [957.116866ms] Jun 26 21:31:06.693: INFO: Created: latency-svc-4g2p7 Jun 26 21:31:06.712: INFO: Got endpoints: latency-svc-4g2p7 [1.043861646s] Jun 26 21:31:06.743: INFO: Created: latency-svc-2ndxw Jun 26 21:31:06.758: INFO: Got endpoints: latency-svc-2ndxw [1.026054297s] Jun 26 21:31:06.779: INFO: Created: latency-svc-ldwj8 Jun 26 21:31:06.813: INFO: Got endpoints: latency-svc-ldwj8 [1.004568634s] Jun 26 21:31:06.827: INFO: Created: latency-svc-5fqbw Jun 26 21:31:06.842: INFO: Got endpoints: latency-svc-5fqbw [969.172434ms] Jun 26 21:31:06.864: INFO: Created: latency-svc-6pfxv Jun 26 21:31:06.873: INFO: Got endpoints: latency-svc-6pfxv [969.457376ms] Jun 26 21:31:06.892: INFO: Created: latency-svc-g6wtx Jun 26 21:31:06.909: INFO: Got endpoints: latency-svc-g6wtx [975.612514ms] Jun 26 21:31:06.946: INFO: Created: latency-svc-v49rn Jun 26 21:31:06.964: INFO: Got endpoints: latency-svc-v49rn [949.542868ms] Jun 26 21:31:06.983: INFO: Created: latency-svc-pcnvm Jun 26 21:31:07.000: INFO: Got endpoints: latency-svc-pcnvm [870.489954ms] Jun 26 21:31:07.026: INFO: Created: latency-svc-qnlgh Jun 26 21:31:07.082: INFO: Got endpoints: latency-svc-qnlgh [890.114822ms] Jun 26 21:31:07.115: INFO: Created: latency-svc-kzp4m Jun 26 21:31:07.144: INFO: Got endpoints: latency-svc-kzp4m [882.358094ms] Jun 26 21:31:07.220: INFO: Created: latency-svc-jvlls Jun 26 21:31:07.223: INFO: Got endpoints: latency-svc-jvlls [933.85568ms] Jun 26 21:31:07.254: INFO: Created: latency-svc-xs86t Jun 26 21:31:07.264: INFO: Got endpoints: latency-svc-xs86t [903.17593ms] Jun 26 21:31:07.370: INFO: Created: latency-svc-m69zb Jun 26 21:31:07.379: INFO: Got endpoints: latency-svc-m69zb [966.90328ms] Jun 26 21:31:07.440: INFO: Created: latency-svc-4wnnl Jun 26 21:31:07.451: INFO: Got endpoints: latency-svc-4wnnl [919.637242ms] Jun 26 21:31:07.525: INFO: Created: latency-svc-sr5v2 Jun 26 21:31:07.528: INFO: Got endpoints: latency-svc-sr5v2 [937.945768ms] Jun 26 21:31:07.559: INFO: Created: latency-svc-ncqpb Jun 26 21:31:07.572: INFO: Got endpoints: latency-svc-ncqpb [859.121272ms] Jun 26 21:31:07.594: INFO: 
Created: latency-svc-bcgwh Jun 26 21:31:07.608: INFO: Got endpoints: latency-svc-bcgwh [850.127037ms] Jun 26 21:31:07.681: INFO: Created: latency-svc-fxhgt Jun 26 21:31:07.704: INFO: Got endpoints: latency-svc-fxhgt [890.75441ms] Jun 26 21:31:07.705: INFO: Created: latency-svc-ftkpb Jun 26 21:31:07.716: INFO: Got endpoints: latency-svc-ftkpb [873.278103ms] Jun 26 21:31:07.734: INFO: Created: latency-svc-mnds7 Jun 26 21:31:07.746: INFO: Got endpoints: latency-svc-mnds7 [873.375312ms] Jun 26 21:31:07.768: INFO: Created: latency-svc-65d2d Jun 26 21:31:07.806: INFO: Got endpoints: latency-svc-65d2d [896.978357ms] Jun 26 21:31:07.828: INFO: Created: latency-svc-r972t Jun 26 21:31:07.843: INFO: Got endpoints: latency-svc-r972t [879.303434ms] Jun 26 21:31:07.864: INFO: Created: latency-svc-2d2bs Jun 26 21:31:07.901: INFO: Got endpoints: latency-svc-2d2bs [901.138837ms] Jun 26 21:31:07.901: INFO: Latencies: [89.250296ms 125.394635ms 157.684378ms 167.85266ms 253.330861ms 312.396285ms 384.818319ms 432.236934ms 534.258308ms 576.82979ms 618.000242ms 631.16983ms 637.314975ms 637.685461ms 649.22275ms 654.670621ms 657.519342ms 673.91745ms 675.214524ms 685.463226ms 696.785452ms 706.936712ms 709.896144ms 709.927025ms 711.799891ms 712.813679ms 714.013118ms 719.010547ms 721.210824ms 723.076285ms 723.857944ms 729.713009ms 733.101984ms 735.381459ms 736.085417ms 736.817636ms 741.825812ms 741.970504ms 742.455115ms 751.562195ms 752.863282ms 757.498394ms 759.801563ms 762.86534ms 764.389372ms 764.743745ms 766.322899ms 767.829507ms 769.060307ms 770.310041ms 772.336297ms 778.513204ms 779.647481ms 783.770514ms 784.089932ms 787.409891ms 789.985133ms 795.767319ms 796.312608ms 798.958237ms 806.178173ms 810.718289ms 813.666475ms 817.363933ms 821.775853ms 825.129749ms 825.880665ms 826.111068ms 829.927036ms 831.211446ms 833.317312ms 833.62646ms 834.724592ms 837.341394ms 838.109921ms 840.802261ms 841.307541ms 842.730169ms 845.922294ms 849.556162ms 850.127037ms 850.291254ms 851.802182ms 851.93128ms 854.806733ms 855.08938ms 856.4469ms 856.762553ms 857.629129ms 858.353335ms 859.121272ms 860.889986ms 861.790207ms 861.897872ms 862.324029ms 863.235821ms 865.303377ms 868.486531ms 868.865387ms 869.135156ms 870.33826ms 870.489954ms 873.278103ms 873.375312ms 875.029074ms 875.223032ms 875.546357ms 876.482021ms 876.728568ms 877.389561ms 879.244159ms 879.303434ms 879.423172ms 880.876043ms 882.358094ms 886.541319ms 886.581113ms 890.114822ms 890.75441ms 892.687313ms 895.531157ms 896.978357ms 901.138837ms 901.242216ms 903.142422ms 903.17593ms 903.576428ms 907.707132ms 907.954417ms 911.055166ms 916.707954ms 918.350786ms 918.511426ms 918.972979ms 919.637242ms 925.203345ms 927.421222ms 927.783853ms 933.150898ms 933.85568ms 934.019383ms 935.00863ms 936.953076ms 937.173895ms 937.785486ms 937.945768ms 939.656489ms 939.734943ms 939.971741ms 949.542868ms 950.654175ms 954.623936ms 956.612583ms 957.116866ms 960.948248ms 962.669184ms 963.796824ms 963.832988ms 966.023078ms 966.90328ms 969.172434ms 969.2163ms 969.457376ms 975.495512ms 975.612514ms 978.040208ms 978.986152ms 983.974808ms 984.010791ms 988.773507ms 994.154655ms 999.527297ms 1.004568634s 1.00998218s 1.014269884s 1.026054297s 1.026380008s 1.035533491s 1.040226123s 1.043175779s 1.043861646s 1.058989713s 1.060380836s 1.092964475s 1.108289508s 1.12015682s 1.130611839s 1.163555388s 1.210569841s 1.235643698s 1.276319805s 1.322742908s 1.337910418s 1.340001484s 1.356831354s 1.377368049s 1.390608744s 1.395489621s 1.398710825s 1.450648402s] Jun 26 21:31:07.901: INFO: 50 %ile: 870.33826ms Jun 26 
21:31:07.901: INFO: 90 %ile: 1.043861646s Jun 26 21:31:07.901: INFO: 99 %ile: 1.398710825s Jun 26 21:31:07.901: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:07.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6481" for this suite. • [SLOW TEST:16.281 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":60,"skipped":1061,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:07.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 26 21:31:14.437: INFO: 5 pods remaining Jun 26 21:31:14.437: INFO: 0 pods has nil DeletionTimestamp Jun 26 21:31:14.437: INFO: Jun 26 21:31:15.565: INFO: 0 pods remaining Jun 26 21:31:15.565: INFO: 0 pods has nil DeletionTimestamp Jun 26 21:31:15.565: INFO: Jun 26 21:31:16.351: INFO: 0 pods remaining Jun 26 21:31:16.351: INFO: 0 pods has nil DeletionTimestamp Jun 26 21:31:16.351: INFO: STEP: Gathering metrics W0626 21:31:17.642048 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
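The countdown above ("5 pods remaining" falling to 0 while the rc still exists) is the behavior under test: a delete whose deleteOptions carry propagationPolicy Foreground puts a deletionTimestamp and the foregroundDeletion finalizer on the rc, so the owner is only removed once every dependent pod is gone. A hedged sketch of issuing such a delete through the REST API via `kubectl proxy`; the rc name "simpletest.rc" is hypothetical, since the log does not print it:

```sh
# Foreground cascading delete of a replication controller via the raw API.
kubectl proxy --port=8001 &
curl -X DELETE \
  'http://127.0.0.1:8001/api/v1/namespaces/gc-3766/replicationcontrollers/simpletest.rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```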
Jun 26 21:31:17.642: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:17.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3766" for this suite. • [SLOW TEST:10.022 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":61,"skipped":1080,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:17.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-e867708a-b9ce-4652-923b-539892c28cf4 STEP: Creating a pod to test consume configMaps Jun 26 21:31:18.790: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33" in namespace "projected-4230" to be "success or failure" Jun 26 21:31:18.926: INFO: Pod "pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33": Phase="Pending", Reason="", readiness=false. Elapsed: 135.999532ms Jun 26 21:31:20.941: INFO: Pod "pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150830559s Jun 26 21:31:22.981: INFO: Pod "pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.190183724s STEP: Saw pod success Jun 26 21:31:22.981: INFO: Pod "pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33" satisfied condition "success or failure" Jun 26 21:31:22.990: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33 container projected-configmap-volume-test: STEP: delete the pod Jun 26 21:31:23.106: INFO: Waiting for pod pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33 to disappear Jun 26 21:31:23.113: INFO: Pod pod-projected-configmaps-87ebc053-80b7-4e30-a700-9d1bd64ebb33 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:23.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4230" for this suite. • [SLOW TEST:5.208 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1102,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:23.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jun 26 21:31:23.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 26 21:31:23.439: INFO: stderr: "" Jun 26 21:31:23.439: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:23.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7211" for this suite. 
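The escape sequences in the captured stdout above are ANSI color codes, not corruption: stripped of them, the test saw "Kubernetes master is running at https://172.30.12.66:32770" plus the KubeDNS proxy URL, which is what the assertion checks for. The equivalent manual check:

```sh
kubectl --kubeconfig=/root/.kube/config cluster-info
# and, as the output itself suggests, for deeper debugging:
kubectl --kubeconfig=/root/.kube/config cluster-info dump
```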
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":63,"skipped":1105,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:23.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2541/configmap-test-2977b33f-458d-4356-9e53-ec229ff2edf8 STEP: Creating a pod to test consume configMaps Jun 26 21:31:23.928: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6" in namespace "configmap-2541" to be "success or failure" Jun 26 21:31:23.959: INFO: Pod "pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.983676ms Jun 26 21:31:26.238: INFO: Pod "pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30985063s Jun 26 21:31:28.289: INFO: Pod "pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6": Phase="Running", Reason="", readiness=true. Elapsed: 4.360618089s Jun 26 21:31:30.332: INFO: Pod "pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.403452936s STEP: Saw pod success Jun 26 21:31:30.332: INFO: Pod "pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6" satisfied condition "success or failure" Jun 26 21:31:30.343: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6 container env-test: STEP: delete the pod Jun 26 21:31:30.508: INFO: Waiting for pod pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6 to disappear Jun 26 21:31:30.521: INFO: Pod pod-configmaps-b6565127-2027-4ec3-83e2-6bce80dd75f6 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:30.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2541" for this suite. 
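------------------------------
The spec above injects a ConfigMap key into a container through an environment variable. A minimal sketch of the pod shape involved, written with the k8s.io/api Go types the e2e framework itself is built on; the ConfigMap name, key, and image below are illustrative, not taken from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One env var whose value is resolved from a ConfigMap key at pod start.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c", "env | grep CONFIG_DATA"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------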
• [SLOW TEST:7.147 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1116,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:30.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7a1c52ac-83bb-43fb-bf60-0061dfc851e6 STEP: Creating a pod to test consume configMaps Jun 26 21:31:30.691: INFO: Waiting up to 5m0s for pod "pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a" in namespace "configmap-5198" to be "success or failure" Jun 26 21:31:30.696: INFO: Pod "pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.457343ms Jun 26 21:31:32.747: INFO: Pod "pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056630591s Jun 26 21:31:34.759: INFO: Pod "pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068385045s STEP: Saw pod success Jun 26 21:31:34.759: INFO: Pod "pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a" satisfied condition "success or failure" Jun 26 21:31:34.762: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a container configmap-volume-test: STEP: delete the pod Jun 26 21:31:34.897: INFO: Waiting for pod pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a to disappear Jun 26 21:31:34.926: INFO: Pod pod-configmaps-2437f555-ef29-4c45-a2ca-c53bb8fe130a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:34.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5198" for this suite. 
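------------------------------
"Consumable in multiple volumes in the same pod" boils down to two volume entries pointing at one ConfigMap, mounted at different paths in the same container. A sketch under the same assumptions as the previous note (illustrative names and image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both volumes are backed by the same ConfigMap.
	src := corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-two-volumes"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cfg-1/data-1 /etc/cfg-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cfg-1", MountPath: "/etc/cfg-1"},
					{Name: "cfg-2", MountPath: "/etc/cfg-2"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "cfg-1", VolumeSource: src},
				{Name: "cfg-2", VolumeSource: src},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------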
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:34.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 26 21:31:35.109: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 26 21:31:45.742: INFO: >>> kubeConfig: /root/.kube/config Jun 26 21:31:48.655: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:31:58.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7191" for this suite. 
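------------------------------
The multi-version case above registers a single CustomResourceDefinition serving two versions of one group, then checks that both surface in the published OpenAPI document. A hedged sketch of such a CRD using the apiextensions v1 Go types; the group, kind, and (trivial) schema are made up for illustration:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Every served version needs a structural schema for OpenAPI publishing.
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com", // illustrative group, not the test's generated one
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			// Two served versions of the same group; exactly one is storage.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
------------------------------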
• [SLOW TEST:23.218 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":66,"skipped":1181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:31:58.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jun 26 21:31:58.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6541' Jun 26 21:31:58.525: INFO: stderr: "" Jun 26 21:31:58.525: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 21:31:58.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6541' Jun 26 21:31:58.643: INFO: stderr: "" Jun 26 21:31:58.644: INFO: stdout: "update-demo-nautilus-2qb6f update-demo-nautilus-bhr6p " Jun 26 21:31:58.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qb6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6541' Jun 26 21:31:58.729: INFO: stderr: "" Jun 26 21:31:58.729: INFO: stdout: "" Jun 26 21:31:58.729: INFO: update-demo-nautilus-2qb6f is created but not running Jun 26 21:32:03.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6541' Jun 26 21:32:03.839: INFO: stderr: "" Jun 26 21:32:03.839: INFO: stdout: "update-demo-nautilus-2qb6f update-demo-nautilus-bhr6p " Jun 26 21:32:03.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qb6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6541' Jun 26 21:32:03.935: INFO: stderr: "" Jun 26 21:32:03.935: INFO: stdout: "true" Jun 26 21:32:03.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qb6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6541' Jun 26 21:32:04.026: INFO: stderr: "" Jun 26 21:32:04.026: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 21:32:04.026: INFO: validating pod update-demo-nautilus-2qb6f Jun 26 21:32:04.040: INFO: got data: { "image": "nautilus.jpg" } Jun 26 21:32:04.040: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 21:32:04.040: INFO: update-demo-nautilus-2qb6f is verified up and running Jun 26 21:32:04.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhr6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6541' Jun 26 21:32:04.140: INFO: stderr: "" Jun 26 21:32:04.140: INFO: stdout: "true" Jun 26 21:32:04.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhr6p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6541' Jun 26 21:32:04.233: INFO: stderr: "" Jun 26 21:32:04.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 21:32:04.233: INFO: validating pod update-demo-nautilus-bhr6p Jun 26 21:32:04.251: INFO: got data: { "image": "nautilus.jpg" } Jun 26 21:32:04.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 21:32:04.251: INFO: update-demo-nautilus-bhr6p is verified up and running STEP: using delete to clean up resources Jun 26 21:32:04.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6541' Jun 26 21:32:04.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:32:04.351: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 26 21:32:04.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6541' Jun 26 21:32:04.450: INFO: stderr: "No resources found in kubectl-6541 namespace.\n" Jun 26 21:32:04.450: INFO: stdout: "" Jun 26 21:32:04.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6541 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 21:32:04.549: INFO: stderr: "" Jun 26 21:32:04.549: INFO: stdout: "update-demo-nautilus-2qb6f\nupdate-demo-nautilus-bhr6p\n" Jun 26 21:32:05.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6541' Jun 26 21:32:05.148: INFO: stderr: "No resources found in kubectl-6541 namespace.\n" Jun 26 21:32:05.148: INFO: stdout: "" Jun 26 21:32:05.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6541 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 21:32:05.240: INFO: stderr: "" Jun 26 21:32:05.240: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:05.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6541" for this suite. • [SLOW TEST:7.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":67,"skipped":1215,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:05.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 26 21:32:06.123: INFO: >>> kubeConfig: /root/.kube/config Jun 26 21:32:08.361: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-543" for this suite. • [SLOW TEST:13.436 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":68,"skipped":1218,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:18.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 26 21:32:18.911: INFO: Waiting up to 5m0s for pod "pod-036125ba-622e-4a0c-b91f-9bbaede50d49" in namespace "emptydir-8186" to be "success or failure" Jun 26 21:32:18.934: INFO: Pod "pod-036125ba-622e-4a0c-b91f-9bbaede50d49": Phase="Pending", Reason="", readiness=false. Elapsed: 22.329616ms Jun 26 21:32:20.938: INFO: Pod "pod-036125ba-622e-4a0c-b91f-9bbaede50d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026874856s Jun 26 21:32:22.945: INFO: Pod "pod-036125ba-622e-4a0c-b91f-9bbaede50d49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03427229s STEP: Saw pod success Jun 26 21:32:22.946: INFO: Pod "pod-036125ba-622e-4a0c-b91f-9bbaede50d49" satisfied condition "success or failure" Jun 26 21:32:22.948: INFO: Trying to get logs from node jerma-worker pod pod-036125ba-622e-4a0c-b91f-9bbaede50d49 container test-container: STEP: delete the pod Jun 26 21:32:22.965: INFO: Waiting for pod pod-036125ba-622e-4a0c-b91f-9bbaede50d49 to disappear Jun 26 21:32:22.976: INFO: Pod pod-036125ba-622e-4a0c-b91f-9bbaede50d49 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8186" for this suite. 
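------------------------------
In the (root,0666,tmpfs) case above, "tmpfs" comes from the emptyDir medium and "0666" is the file mode the test container writes and reads back. A sketch of the volume shape, with an illustrative busybox command standing in for the suite's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with the requested mode, then show the
				// permissions and the tmpfs mount backing the volume.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the kubelet back the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------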
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1237,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:22.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:23.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1995" for this suite. 
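------------------------------
The discovery-document spec fetches /apis and drills down to apiextensions.k8s.io/v1. The same top-level check can be expressed against client-go's discovery client; this sketch assumes the kubeconfig path the log shows and only mirrors the first of the spec's steps:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// ServerGroups is backed by the /apis discovery document.
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			for _, v := range g.Versions {
				fmt.Println("found group/version:", v.GroupVersion)
			}
		}
	}
}
------------------------------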
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":70,"skipped":1237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:23.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:32:23.199: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68" in namespace "downward-api-3870" to be "success or failure" Jun 26 21:32:23.225: INFO: Pod "downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68": Phase="Pending", Reason="", readiness=false. Elapsed: 26.251824ms Jun 26 21:32:25.229: INFO: Pod "downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030159378s Jun 26 21:32:27.233: INFO: Pod "downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033405514s STEP: Saw pod success Jun 26 21:32:27.233: INFO: Pod "downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68" satisfied condition "success or failure" Jun 26 21:32:27.235: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68 container client-container: STEP: delete the pod Jun 26 21:32:27.272: INFO: Waiting for pod downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68 to disappear Jun 26 21:32:27.311: INFO: Pod downwardapi-volume-ee2bd73c-39b6-4fb5-af00-dd7464593c68 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:27.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3870" for this suite. 
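------------------------------
"Provide container's cpu request" is the downward API volume exposing a container's own resource request as a file. A sketch of that plumbing; the request value, paths, and names are illustrative, though the container name must match the ResourceFieldRef's ContainerName as shown:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file's content is the named container's cpu request.
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------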
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1310,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:27.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 21:32:30.459: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:30.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8291" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:30.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-fkml STEP: Creating a pod to test atomic-volume-subpath Jun 26 21:32:30.799: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fkml" in namespace "subpath-5250" to be "success or failure" Jun 26 21:32:30.928: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Pending", Reason="", readiness=false. 
Elapsed: 129.475492ms Jun 26 21:32:32.964: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165053163s Jun 26 21:32:34.967: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 4.168556896s Jun 26 21:32:36.972: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 6.173544639s Jun 26 21:32:38.977: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 8.17827651s Jun 26 21:32:40.982: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 10.182912646s Jun 26 21:32:42.986: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 12.186952751s Jun 26 21:32:44.989: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 14.190639263s Jun 26 21:32:46.994: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 16.195086579s Jun 26 21:32:48.998: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 18.199215241s Jun 26 21:32:51.002: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 20.203656393s Jun 26 21:32:53.007: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Running", Reason="", readiness=true. Elapsed: 22.207946298s Jun 26 21:32:55.011: INFO: Pod "pod-subpath-test-configmap-fkml": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.211810341s STEP: Saw pod success Jun 26 21:32:55.011: INFO: Pod "pod-subpath-test-configmap-fkml" satisfied condition "success or failure" Jun 26 21:32:55.014: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-fkml container test-container-subpath-configmap-fkml: STEP: delete the pod Jun 26 21:32:55.050: INFO: Waiting for pod pod-subpath-test-configmap-fkml to disappear Jun 26 21:32:55.078: INFO: Pod pod-subpath-test-configmap-fkml no longer exists STEP: Deleting pod pod-subpath-test-configmap-fkml Jun 26 21:32:55.078: INFO: Deleting pod "pod-subpath-test-configmap-fkml" in namespace "subpath-5250" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5250" for this suite. 
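------------------------------
The subpath spec above mounts a single key from a ConfigMap volume over the path of a file that already exists in the image, leaving the rest of the directory untouched. The mechanism is a VolumeMount whose MountPath names a file and whose SubPath names one entry in the volume; target path, key, and names here are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/existing/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/etc/existing/file", // path of one file, not a directory
					SubPath:   "data-1",             // a single key projected from the volume
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------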
• [SLOW TEST:24.589 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":73,"skipped":1387,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:55.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-52ca7595-7a53-45a4-b62d-8588c4b07715 STEP: Creating a pod to test consume secrets Jun 26 21:32:55.270: INFO: Waiting up to 5m0s for pod "pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a" in namespace "secrets-2444" to be "success or failure" Jun 26 21:32:55.275: INFO: Pod "pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.406721ms Jun 26 21:32:57.280: INFO: Pod "pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009852277s Jun 26 21:32:59.284: INFO: Pod "pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013601904s STEP: Saw pod success Jun 26 21:32:59.284: INFO: Pod "pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a" satisfied condition "success or failure" Jun 26 21:32:59.286: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a container secret-env-test: STEP: delete the pod Jun 26 21:32:59.301: INFO: Waiting for pod pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a to disappear Jun 26 21:32:59.306: INFO: Pod pod-secrets-9387b889-869f-420c-9ec4-72cd255cbe3a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:32:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2444" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1389,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:32:59.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:32:59.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358" in namespace "projected-3189" to be "success or failure" Jun 26 21:32:59.427: INFO: Pod "downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415655ms Jun 26 21:33:01.431: INFO: Pod "downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006534693s Jun 26 21:33:03.436: INFO: Pod "downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01085245s STEP: Saw pod success Jun 26 21:33:03.436: INFO: Pod "downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358" satisfied condition "success or failure" Jun 26 21:33:03.440: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358 container client-container: STEP: delete the pod Jun 26 21:33:03.477: INFO: Waiting for pod downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358 to disappear Jun 26 21:33:03.506: INFO: Pod downwardapi-volume-5c701369-a2fe-4305-9b98-5f692c1ea358 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:33:03.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3189" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1402,"failed":0} ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:33:03.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-32dedf75-98e3-40a7-8408-d66d6fb1e4cc in namespace container-probe-7064 Jun 26 21:33:07.665: INFO: Started pod busybox-32dedf75-98e3-40a7-8408-d66d6fb1e4cc in namespace container-probe-7064 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 21:33:07.668: INFO: Initial restart count of pod busybox-32dedf75-98e3-40a7-8408-d66d6fb1e4cc is 0 Jun 26 21:33:57.820: INFO: Restart count of pod container-probe-7064/busybox-32dedf75-98e3-40a7-8408-d66d6fb1e4cc is now 1 (50.151946867s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:33:57.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7064" for this suite. 
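------------------------------
The restart the log records (restartCount 0 -> 1 after about 50s) is driven by an exec liveness probe. A sketch of that pod, built against the k8s.io/api release contemporary with this v1.17 run, where Probe still embeds Handler (later releases renamed it ProbeHandler); the sleep durations and thresholds are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the health file, then remove it so the probe
				// starts failing and the kubelet restarts the container.
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------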
• [SLOW TEST:54.362 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1402,"failed":0} SSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:33:57.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jun 26 21:33:58.003: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8102" to be "success or failure" Jun 26 21:33:58.054: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.997576ms Jun 26 21:34:00.058: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055037977s Jun 26 21:34:02.063: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059511457s Jun 26 21:34:04.068: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064169616s STEP: Saw pod success Jun 26 21:34:04.068: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 26 21:34:04.071: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 26 21:34:04.089: INFO: Waiting for pod pod-host-path-test to disappear Jun 26 21:34:04.093: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:34:04.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8102" for this suite. 
• [SLOW TEST:6.221 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1406,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:34:04.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:34:04.171: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 26 21:34:07.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 create -f -' Jun 26 21:34:10.557: INFO: stderr: "" Jun 26 21:34:10.557: INFO: stdout: "e2e-test-crd-publish-openapi-555-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 26 21:34:10.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 delete e2e-test-crd-publish-openapi-555-crds test-foo' Jun 26 21:34:10.697: INFO: stderr: "" Jun 26 21:34:10.697: INFO: stdout: "e2e-test-crd-publish-openapi-555-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 26 21:34:10.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 apply -f -' Jun 26 21:34:10.944: INFO: stderr: "" Jun 26 21:34:10.944: INFO: stdout: "e2e-test-crd-publish-openapi-555-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 26 21:34:10.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 delete e2e-test-crd-publish-openapi-555-crds test-foo' Jun 26 21:34:11.057: INFO: stderr: "" Jun 26 21:34:11.057: INFO: stdout: "e2e-test-crd-publish-openapi-555-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 26 21:34:11.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 create -f -' Jun 26 21:34:11.302: INFO: rc: 1 Jun 26 21:34:11.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 apply -f -' Jun 26 21:34:11.536: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 26 21:34:11.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-247 create -f -' Jun 26 21:34:11.798: INFO: rc: 1 Jun 26 21:34:11.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-247 apply -f -' Jun 26 21:34:12.052: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 26 21:34:12.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-555-crds' Jun 26 21:34:12.298: INFO: stderr: "" Jun 26 21:34:12.298: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-555-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 26 21:34:12.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-555-crds.metadata' Jun 26 21:34:12.558: INFO: stderr: "" Jun 26 21:34:12.558: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-555-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 26 21:34:12.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-555-crds.spec' Jun 26 21:34:12.801: INFO: stderr: "" Jun 26 21:34:12.801: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-555-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 26 21:34:12.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-555-crds.spec.bars' Jun 26 21:34:13.062: INFO: stderr: "" Jun 26 21:34:13.063: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-555-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 26 21:34:13.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-555-crds.spec.bars2' Jun 26 21:34:13.308: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:34:15.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-247" for this suite. • [SLOW TEST:11.112 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":78,"skipped":1418,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:34:15.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jun 26 21:34:15.262: INFO: namespace kubectl-9592 Jun 26 21:34:15.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9592' Jun 26 21:34:15.563: INFO: stderr: "" Jun 26 21:34:15.564: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jun 26 21:34:16.567: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:34:16.567: INFO: Found 0 / 1 Jun 26 21:34:17.567: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:34:17.567: INFO: Found 0 / 1 Jun 26 21:34:18.600: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:34:18.600: INFO: Found 0 / 1 Jun 26 21:34:19.568: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:34:19.568: INFO: Found 0 / 1 Jun 26 21:34:20.567: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:34:20.567: INFO: Found 1 / 1 Jun 26 21:34:20.567: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 26 21:34:20.570: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:34:20.570: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 26 21:34:20.570: INFO: wait on agnhost-master startup in kubectl-9592 Jun 26 21:34:20.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-qg99z agnhost-master --namespace=kubectl-9592' Jun 26 21:34:20.717: INFO: stderr: "" Jun 26 21:34:20.717: INFO: stdout: "Paused\n" STEP: exposing RC Jun 26 21:34:20.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9592' Jun 26 21:34:20.862: INFO: stderr: "" Jun 26 21:34:20.862: INFO: stdout: "service/rm2 exposed\n" Jun 26 21:34:20.874: INFO: Service rm2 in namespace kubectl-9592 found. STEP: exposing service Jun 26 21:34:22.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9592' Jun 26 21:34:23.058: INFO: stderr: "" Jun 26 21:34:23.058: INFO: stdout: "service/rm3 exposed\n" Jun 26 21:34:23.073: INFO: Service rm3 in namespace kubectl-9592 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:34:25.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9592" for this suite. 
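The two expose operations in this test can be reproduced by hand against any replication controller. A minimal sketch using the names and ports from the log (namespace flags omitted; all values are otherwise arbitrary):

    # Expose the RC as a service named rm2, mapping service port 1234 to container port 6379
    kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
    # A service can itself be re-exposed under a new name and port
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
    # Both services inherit the RC's selector, so they should list the same endpoints
    kubectl get endpoints rm2 rm3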
• [SLOW TEST:9.875 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":79,"skipped":1428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:34:25.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5031 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5031 I0626 21:34:25.239261 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5031, replica count: 2 I0626 21:34:28.289715 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:34:31.289946 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 21:34:31.290: INFO: Creating new exec pod Jun 26 21:34:36.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5031 execpod6wq55 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 26 21:34:36.622: INFO: stderr: "I0626 21:34:36.452839 1200 log.go:172] (0xc00061c6e0) (0xc0009ea000) Create stream\nI0626 21:34:36.452904 1200 log.go:172] (0xc00061c6e0) (0xc0009ea000) Stream added, broadcasting: 1\nI0626 21:34:36.455914 1200 log.go:172] (0xc00061c6e0) Reply frame received for 1\nI0626 21:34:36.455958 1200 log.go:172] (0xc00061c6e0) (0xc0009ea0a0) Create stream\nI0626 21:34:36.455974 1200 log.go:172] (0xc00061c6e0) (0xc0009ea0a0) Stream added, broadcasting: 3\nI0626 21:34:36.456960 1200 log.go:172] (0xc00061c6e0) Reply frame received for 3\nI0626 21:34:36.457030 1200 log.go:172] (0xc00061c6e0) (0xc00070dae0) Create stream\nI0626 21:34:36.457052 1200 log.go:172] (0xc00061c6e0) (0xc00070dae0) Stream added, broadcasting: 5\nI0626 21:34:36.458124 1200 log.go:172] (0xc00061c6e0) Reply frame received for 5\nI0626 21:34:36.587131 1200 log.go:172] (0xc00061c6e0) Data frame received for 5\nI0626 21:34:36.587160 1200 log.go:172] (0xc00070dae0) (5) Data frame handling\nI0626 21:34:36.587176 1200 log.go:172] 
(0xc00070dae0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0626 21:34:36.613796 1200 log.go:172] (0xc00061c6e0) Data frame received for 5\nI0626 21:34:36.613831 1200 log.go:172] (0xc00070dae0) (5) Data frame handling\nI0626 21:34:36.613848 1200 log.go:172] (0xc00070dae0) (5) Data frame sent\nI0626 21:34:36.613860 1200 log.go:172] (0xc00061c6e0) Data frame received for 5\nI0626 21:34:36.613872 1200 log.go:172] (0xc00070dae0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0626 21:34:36.614167 1200 log.go:172] (0xc00061c6e0) Data frame received for 3\nI0626 21:34:36.614199 1200 log.go:172] (0xc0009ea0a0) (3) Data frame handling\nI0626 21:34:36.616183 1200 log.go:172] (0xc00061c6e0) Data frame received for 1\nI0626 21:34:36.616212 1200 log.go:172] (0xc0009ea000) (1) Data frame handling\nI0626 21:34:36.616240 1200 log.go:172] (0xc0009ea000) (1) Data frame sent\nI0626 21:34:36.616258 1200 log.go:172] (0xc00061c6e0) (0xc0009ea000) Stream removed, broadcasting: 1\nI0626 21:34:36.616371 1200 log.go:172] (0xc00061c6e0) Go away received\nI0626 21:34:36.616542 1200 log.go:172] (0xc00061c6e0) (0xc0009ea000) Stream removed, broadcasting: 1\nI0626 21:34:36.616555 1200 log.go:172] (0xc00061c6e0) (0xc0009ea0a0) Stream removed, broadcasting: 3\nI0626 21:34:36.616561 1200 log.go:172] (0xc00061c6e0) (0xc00070dae0) Stream removed, broadcasting: 5\n" Jun 26 21:34:36.622: INFO: stdout: "" Jun 26 21:34:36.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5031 execpod6wq55 -- /bin/sh -x -c nc -zv -t -w 2 10.107.116.168 80' Jun 26 21:34:36.839: INFO: stderr: "I0626 21:34:36.760124 1220 log.go:172] (0xc0001051e0) (0xc000958000) Create stream\nI0626 21:34:36.760181 1220 log.go:172] (0xc0001051e0) (0xc000958000) Stream added, broadcasting: 1\nI0626 21:34:36.763211 1220 log.go:172] (0xc0001051e0) Reply frame received for 1\nI0626 21:34:36.763255 1220 log.go:172] (0xc0001051e0) (0xc0009580a0) Create stream\nI0626 21:34:36.763272 1220 log.go:172] (0xc0001051e0) (0xc0009580a0) Stream added, broadcasting: 3\nI0626 21:34:36.764494 1220 log.go:172] (0xc0001051e0) Reply frame received for 3\nI0626 21:34:36.764533 1220 log.go:172] (0xc0001051e0) (0xc000958140) Create stream\nI0626 21:34:36.764546 1220 log.go:172] (0xc0001051e0) (0xc000958140) Stream added, broadcasting: 5\nI0626 21:34:36.765731 1220 log.go:172] (0xc0001051e0) Reply frame received for 5\nI0626 21:34:36.831781 1220 log.go:172] (0xc0001051e0) Data frame received for 3\nI0626 21:34:36.831830 1220 log.go:172] (0xc0009580a0) (3) Data frame handling\nI0626 21:34:36.832189 1220 log.go:172] (0xc0001051e0) Data frame received for 5\nI0626 21:34:36.832213 1220 log.go:172] (0xc000958140) (5) Data frame handling\nI0626 21:34:36.832239 1220 log.go:172] (0xc000958140) (5) Data frame sent\nI0626 21:34:36.832274 1220 log.go:172] (0xc0001051e0) Data frame received for 5\n+ nc -zv -t -w 2 10.107.116.168 80\nConnection to 10.107.116.168 80 port [tcp/http] succeeded!\nI0626 21:34:36.832300 1220 log.go:172] (0xc000958140) (5) Data frame handling\nI0626 21:34:36.833400 1220 log.go:172] (0xc0001051e0) Data frame received for 1\nI0626 21:34:36.833427 1220 log.go:172] (0xc000958000) (1) Data frame handling\nI0626 21:34:36.833444 1220 log.go:172] (0xc000958000) (1) Data frame sent\nI0626 21:34:36.833466 1220 log.go:172] (0xc0001051e0) (0xc000958000) Stream removed, broadcasting: 1\nI0626 21:34:36.833490 1220 log.go:172] (0xc0001051e0) Go away received\nI0626 
21:34:36.833960 1220 log.go:172] (0xc0001051e0) (0xc000958000) Stream removed, broadcasting: 1\nI0626 21:34:36.833990 1220 log.go:172] (0xc0001051e0) (0xc0009580a0) Stream removed, broadcasting: 3\nI0626 21:34:36.834007 1220 log.go:172] (0xc0001051e0) (0xc000958140) Stream removed, broadcasting: 5\n" Jun 26 21:34:36.839: INFO: stdout: "" Jun 26 21:34:36.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5031 execpod6wq55 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31592' Jun 26 21:34:37.065: INFO: stderr: "I0626 21:34:36.979645 1242 log.go:172] (0xc0000f5600) (0xc000647ae0) Create stream\nI0626 21:34:36.979721 1242 log.go:172] (0xc0000f5600) (0xc000647ae0) Stream added, broadcasting: 1\nI0626 21:34:36.982667 1242 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0626 21:34:36.982718 1242 log.go:172] (0xc0000f5600) (0xc000647cc0) Create stream\nI0626 21:34:36.982737 1242 log.go:172] (0xc0000f5600) (0xc000647cc0) Stream added, broadcasting: 3\nI0626 21:34:36.983901 1242 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0626 21:34:36.983934 1242 log.go:172] (0xc0000f5600) (0xc0009d0000) Create stream\nI0626 21:34:36.983950 1242 log.go:172] (0xc0000f5600) (0xc0009d0000) Stream added, broadcasting: 5\nI0626 21:34:36.984964 1242 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0626 21:34:37.054371 1242 log.go:172] (0xc0000f5600) Data frame received for 3\nI0626 21:34:37.054412 1242 log.go:172] (0xc000647cc0) (3) Data frame handling\nI0626 21:34:37.054438 1242 log.go:172] (0xc0000f5600) Data frame received for 5\nI0626 21:34:37.054452 1242 log.go:172] (0xc0009d0000) (5) Data frame handling\nI0626 21:34:37.054465 1242 log.go:172] (0xc0009d0000) (5) Data frame sent\nI0626 21:34:37.054477 1242 log.go:172] (0xc0000f5600) Data frame received for 5\nI0626 21:34:37.054499 1242 log.go:172] (0xc0009d0000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31592\nConnection to 172.17.0.10 31592 port [tcp/31592] succeeded!\nI0626 21:34:37.056004 1242 log.go:172] (0xc0000f5600) Data frame received for 1\nI0626 21:34:37.056031 1242 log.go:172] (0xc000647ae0) (1) Data frame handling\nI0626 21:34:37.056049 1242 log.go:172] (0xc000647ae0) (1) Data frame sent\nI0626 21:34:37.056064 1242 log.go:172] (0xc0000f5600) (0xc000647ae0) Stream removed, broadcasting: 1\nI0626 21:34:37.056328 1242 log.go:172] (0xc0000f5600) Go away received\nI0626 21:34:37.056396 1242 log.go:172] (0xc0000f5600) (0xc000647ae0) Stream removed, broadcasting: 1\nI0626 21:34:37.056426 1242 log.go:172] (0xc0000f5600) (0xc000647cc0) Stream removed, broadcasting: 3\nI0626 21:34:37.056445 1242 log.go:172] (0xc0000f5600) (0xc0009d0000) Stream removed, broadcasting: 5\n" Jun 26 21:34:37.065: INFO: stdout: "" Jun 26 21:34:37.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5031 execpod6wq55 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31592' Jun 26 21:34:37.269: INFO: stderr: "I0626 21:34:37.205740 1265 log.go:172] (0xc0006aca50) (0xc0006a41e0) Create stream\nI0626 21:34:37.205796 1265 log.go:172] (0xc0006aca50) (0xc0006a41e0) Stream added, broadcasting: 1\nI0626 21:34:37.208579 1265 log.go:172] (0xc0006aca50) Reply frame received for 1\nI0626 21:34:37.208642 1265 log.go:172] (0xc0006aca50) (0xc0006fa000) Create stream\nI0626 21:34:37.208673 1265 log.go:172] (0xc0006aca50) (0xc0006fa000) Stream added, broadcasting: 3\nI0626 21:34:37.209792 1265 log.go:172] (0xc0006aca50) Reply frame received for 3\nI0626 21:34:37.209837 1265 
log.go:172] (0xc0006aca50) (0xc0006a4280) Create stream\nI0626 21:34:37.209855 1265 log.go:172] (0xc0006aca50) (0xc0006a4280) Stream added, broadcasting: 5\nI0626 21:34:37.210818 1265 log.go:172] (0xc0006aca50) Reply frame received for 5\nI0626 21:34:37.260039 1265 log.go:172] (0xc0006aca50) Data frame received for 5\nI0626 21:34:37.260068 1265 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0626 21:34:37.260075 1265 log.go:172] (0xc0006a4280) (5) Data frame sent\nI0626 21:34:37.260080 1265 log.go:172] (0xc0006aca50) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.8 31592\nConnection to 172.17.0.8 31592 port [tcp/31592] succeeded!\nI0626 21:34:37.260090 1265 log.go:172] (0xc0006aca50) Data frame received for 3\nI0626 21:34:37.260108 1265 log.go:172] (0xc0006fa000) (3) Data frame handling\nI0626 21:34:37.260122 1265 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0626 21:34:37.261729 1265 log.go:172] (0xc0006aca50) Data frame received for 1\nI0626 21:34:37.261742 1265 log.go:172] (0xc0006a41e0) (1) Data frame handling\nI0626 21:34:37.261755 1265 log.go:172] (0xc0006a41e0) (1) Data frame sent\nI0626 21:34:37.261906 1265 log.go:172] (0xc0006aca50) (0xc0006a41e0) Stream removed, broadcasting: 1\nI0626 21:34:37.262065 1265 log.go:172] (0xc0006aca50) Go away received\nI0626 21:34:37.262142 1265 log.go:172] (0xc0006aca50) (0xc0006a41e0) Stream removed, broadcasting: 1\nI0626 21:34:37.262154 1265 log.go:172] (0xc0006aca50) (0xc0006fa000) Stream removed, broadcasting: 3\nI0626 21:34:37.262166 1265 log.go:172] (0xc0006aca50) (0xc0006a4280) Stream removed, broadcasting: 5\n" Jun 26 21:34:37.269: INFO: stdout: "" Jun 26 21:34:37.269: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:34:37.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5031" for this suite. 
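Once the service has been switched to NodePort, the three probes above can be run from any in-cluster pod; they check, in order, the service DNS name, the ClusterIP, and a node IP with the allocated NodePort. A sketch with the values this particular run happened to get (IPs, pod name, and port 31592 will differ per cluster; namespace flags omitted):

    # -z only tests that the TCP connection can be opened
    kubectl exec execpod6wq55 -- sh -c 'nc -zv -t -w 2 externalname-service 80'
    kubectl exec execpod6wq55 -- sh -c 'nc -zv -t -w 2 10.107.116.168 80'
    kubectl exec execpod6wq55 -- sh -c 'nc -zv -t -w 2 172.17.0.10 31592'
    # The allocated NodePort can be read back from the service
    kubectl get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}'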
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.238 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":80,"skipped":1458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:34:37.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 26 21:34:37.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 21:34:37.405: INFO: Waiting for terminating namespaces to be deleted... Jun 26 21:34:37.407: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jun 26 21:34:37.411: INFO: agnhost-master-qg99z from kubectl-9592 started at 2020-06-26 21:34:15 +0000 UTC (1 container status recorded) Jun 26 21:34:37.411: INFO: Container agnhost-master ready: false, restart count 0 Jun 26 21:34:37.411: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 26 21:34:37.411: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 21:34:37.411: INFO: externalname-service-mb7hl from services-5031 started at 2020-06-26 21:34:25 +0000 UTC (1 container status recorded) Jun 26 21:34:37.411: INFO: Container externalname-service ready: true, restart count 0 Jun 26 21:34:37.411: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 26 21:34:37.411: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 21:34:37.411: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jun 26 21:34:37.416: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 26 21:34:37.416: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 21:34:37.416: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Jun 26 21:34:37.416: INFO: Container kube-bench ready: false, restart count 0 Jun 26 21:34:37.416: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 26 21:34:37.416: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 21:34:37.416: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Jun 26 21:34:37.416: INFO: Container
kube-hunter ready: false, restart count 0 Jun 26 21:34:37.416: INFO: externalname-service-ms59t from services-5031 started at 2020-06-26 21:34:25 +0000 UTC (1 container status recorded) Jun 26 21:34:37.416: INFO: Container externalname-service ready: true, restart count 0 Jun 26 21:34:37.416: INFO: execpod6wq55 from services-5031 started at 2020-06-26 21:34:31 +0000 UTC (1 container status recorded) Jun 26 21:34:37.416: INFO: Container agnhost-pause ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f2644b4d-01e5-4314-9eef-f169e31fcf00 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-f2644b4d-01e5-4314-9eef-f169e31fcf00 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-f2644b4d-01e5-4314-9eef-f169e31fcf00 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:39:45.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1291" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.251 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":81,"skipped":1486,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:39:45.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 26 21:39:52.169: INFO: Successfully updated pod "adopt-release-5dkrd" STEP: Checking that the Job
readopts the Pod Jun 26 21:39:52.169: INFO: Waiting up to 15m0s for pod "adopt-release-5dkrd" in namespace "job-6586" to be "adopted" Jun 26 21:39:52.204: INFO: Pod "adopt-release-5dkrd": Phase="Running", Reason="", readiness=true. Elapsed: 34.692899ms Jun 26 21:39:54.228: INFO: Pod "adopt-release-5dkrd": Phase="Running", Reason="", readiness=true. Elapsed: 2.058688488s Jun 26 21:39:54.228: INFO: Pod "adopt-release-5dkrd" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 26 21:39:54.736: INFO: Successfully updated pod "adopt-release-5dkrd" STEP: Checking that the Job releases the Pod Jun 26 21:39:54.736: INFO: Waiting up to 15m0s for pod "adopt-release-5dkrd" in namespace "job-6586" to be "released" Jun 26 21:39:54.741: INFO: Pod "adopt-release-5dkrd": Phase="Running", Reason="", readiness=true. Elapsed: 4.450177ms Jun 26 21:39:56.747: INFO: Pod "adopt-release-5dkrd": Phase="Running", Reason="", readiness=true. Elapsed: 2.010890585s Jun 26 21:39:56.747: INFO: Pod "adopt-release-5dkrd" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:39:56.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6586" for this suite. • [SLOW TEST:11.205 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":82,"skipped":1488,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:39:56.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jun 26 21:39:56.988: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jun 26 21:39:56.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5596' Jun 26 21:39:57.296: INFO: stderr: "" Jun 26 21:39:57.296: INFO: stdout: "service/agnhost-slave created\n" Jun 26 21:39:57.297: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jun 26 21:39:57.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-5596' Jun 26 21:39:57.577: INFO: stderr: "" Jun 26 21:39:57.577: INFO: stdout: "service/agnhost-master created\n" Jun 26 21:39:57.577: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 26 21:39:57.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5596' Jun 26 21:39:57.841: INFO: stderr: "" Jun 26 21:39:57.841: INFO: stdout: "service/frontend created\n" Jun 26 21:39:57.841: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jun 26 21:39:57.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5596' Jun 26 21:39:58.135: INFO: stderr: "" Jun 26 21:39:58.135: INFO: stdout: "deployment.apps/frontend created\n" Jun 26 21:39:58.135: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 26 21:39:58.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5596' Jun 26 21:39:58.430: INFO: stderr: "" Jun 26 21:39:58.431: INFO: stdout: "deployment.apps/agnhost-master created\n" Jun 26 21:39:58.431: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 26 21:39:58.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5596' Jun 26 21:39:58.706: INFO: stderr: "" Jun 26 21:39:58.706: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jun 26 21:39:58.706: INFO: Waiting for all frontend pods to be Running. Jun 26 21:40:08.756: INFO: Waiting for frontend to serve content. Jun 26 21:40:08.768: INFO: Trying to add a new entry to the guestbook. Jun 26 21:40:08.777: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 26 21:40:08.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5596' Jun 26 21:40:08.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:40:08.954: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jun 26 21:40:08.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5596' Jun 26 21:40:09.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:40:09.100: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 26 21:40:09.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5596' Jun 26 21:40:09.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:40:09.217: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 26 21:40:09.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5596' Jun 26 21:40:09.328: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:40:09.328: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 26 21:40:09.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5596' Jun 26 21:40:09.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:40:09.441: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 26 21:40:09.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5596' Jun 26 21:40:09.713: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 21:40:09.713: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:09.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5596" for this suite. 
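Every guestbook component above is created by piping a manifest over stdin and later force-deleted the same way. The pattern, trimmed to the frontend service from the log (the warning lines are exactly what --grace-period=0 --force produces):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
      labels:
        app: guestbook
        tier: frontend
    spec:
      ports:
      - port: 80
      selector:
        app: guestbook
        tier: frontend
    EOF
    # Skips graceful termination; the API object is removed immediately
    kubectl delete --grace-period=0 --force service frontend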
• [SLOW TEST:13.067 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":83,"skipped":1498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:09.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-84c3321d-b6e1-4cbf-940f-4d6dcfd7e9c5 in namespace container-probe-4295 Jun 26 21:40:16.387: INFO: Started pod liveness-84c3321d-b6e1-4cbf-940f-4d6dcfd7e9c5 in namespace container-probe-4295 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 21:40:16.390: INFO: Initial restart count of pod liveness-84c3321d-b6e1-4cbf-940f-4d6dcfd7e9c5 is 0 Jun 26 21:40:36.443: INFO: Restart count of pod container-probe-4295/liveness-84c3321d-b6e1-4cbf-940f-4d6dcfd7e9c5 is now 1 (20.052660658s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:36.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4295" for this suite. 
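The single restart observed above is the kubelet reacting to an HTTP liveness probe that starts failing. A comparable pod spec, sketched around the agnhost image's liveness server rather than the test's exact fixture (the args, port, and thresholds are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http
    spec:
      containers:
      - name: liveness
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["liveness"]   # assumed: serves /healthz, then deliberately starts failing it
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1
    EOF
    # restartCount should tick up once /healthz starts returning errors
    kubectl get pod liveness-http -w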
• [SLOW TEST:26.640 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:36.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:40:37.485: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:40:39.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804437, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804437, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804437, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804437, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:40:42.528: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:40:42.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-306-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:43.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1842" for this suite. STEP: Destroying namespace "webhook-1842-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":85,"skipped":1587,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:43.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:40:44.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:40:46.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804444, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804444, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804444, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804444, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:40:49.285: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:49.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6080" for this suite. STEP: Destroying namespace "webhook-6080-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.548 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":86,"skipped":1604,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:49.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jun 26 21:40:49.489: INFO: Waiting up to 5m0s for pod "client-containers-5b291580-4082-4f6c-8142-36248c1dfba0" in namespace "containers-6032" to be "success or failure" Jun 26 21:40:49.522: INFO: Pod "client-containers-5b291580-4082-4f6c-8142-36248c1dfba0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.855193ms Jun 26 21:40:51.534: INFO: Pod "client-containers-5b291580-4082-4f6c-8142-36248c1dfba0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04462644s Jun 26 21:40:53.538: INFO: Pod "client-containers-5b291580-4082-4f6c-8142-36248c1dfba0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048741738s STEP: Saw pod success Jun 26 21:40:53.538: INFO: Pod "client-containers-5b291580-4082-4f6c-8142-36248c1dfba0" satisfied condition "success or failure" Jun 26 21:40:53.540: INFO: Trying to get logs from node jerma-worker2 pod client-containers-5b291580-4082-4f6c-8142-36248c1dfba0 container test-container: STEP: delete the pod Jun 26 21:40:53.572: INFO: Waiting for pod client-containers-5b291580-4082-4f6c-8142-36248c1dfba0 to disappear Jun 26 21:40:53.576: INFO: Pod client-containers-5b291580-4082-4f6c-8142-36248c1dfba0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:53.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6032" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1608,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:53.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 21:40:53.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1048' Jun 26 21:40:53.785: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 26 21:40:53.785: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Jun 26 21:40:53.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1048' Jun 26 21:40:53.938: INFO: stderr: "" Jun 26 21:40:53.938: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:53.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1048" for this suite. 
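As the deprecation warning in the log says, generator-based kubectl run is on its way out. A close replacement for the command above (note that kubectl create job defaults the pod's restartPolicy to Never rather than OnFailure):

    kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
    kubectl get job e2e-test-httpd-job
    kubectl delete job e2e-test-httpd-job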
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":88,"skipped":1613,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:53.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:40:54.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd" in namespace "downward-api-8927" to be "success or failure" Jun 26 21:40:54.068: INFO: Pod "downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.823339ms Jun 26 21:40:56.108: INFO: Pod "downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042219661s Jun 26 21:40:58.112: INFO: Pod "downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046085052s STEP: Saw pod success Jun 26 21:40:58.112: INFO: Pod "downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd" satisfied condition "success or failure" Jun 26 21:40:58.115: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd container client-container: STEP: delete the pod Jun 26 21:40:58.170: INFO: Waiting for pod downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd to disappear Jun 26 21:40:58.198: INFO: Pod downwardapi-volume-eae7ba58-0dca-4c21-bb61-b5510fd4d1bd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:40:58.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8927" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1620,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:40:58.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:40:58.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1" in namespace "projected-8014" to be "success or failure" Jun 26 21:40:58.287: INFO: Pod "downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.035377ms Jun 26 21:41:00.312: INFO: Pod "downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04049952s Jun 26 21:41:02.317: INFO: Pod "downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045286745s STEP: Saw pod success Jun 26 21:41:02.317: INFO: Pod "downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1" satisfied condition "success or failure" Jun 26 21:41:02.320: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1 container client-container: STEP: delete the pod Jun 26 21:41:02.385: INFO: Waiting for pod downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1 to disappear Jun 26 21:41:02.396: INFO: Pod downwardapi-volume-f79bee95-ab9f-4cec-b895-c7e708ad98d1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:41:02.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8014" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:41:02.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-5509a624-14e7-40ee-9203-024afa470e80 STEP: Creating a pod to test consume secrets Jun 26 21:41:02.458: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b" in namespace "projected-3524" to be "success or failure" Jun 26 21:41:02.499: INFO: Pod "pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.134517ms Jun 26 21:41:04.542: INFO: Pod "pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084446231s Jun 26 21:41:06.546: INFO: Pod "pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088236786s STEP: Saw pod success Jun 26 21:41:06.546: INFO: Pod "pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b" satisfied condition "success or failure" Jun 26 21:41:06.548: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b container projected-secret-volume-test: STEP: delete the pod Jun 26 21:41:06.605: INFO: Waiting for pod pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b to disappear Jun 26 21:41:06.643: INFO: Pod pod-projected-secrets-90901205-0a3f-4f84-bde8-96efc09e4d5b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:41:06.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3524" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:41:06.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 26 21:41:07.123: INFO: Waiting up to 5m0s for pod "pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c" in namespace "emptydir-3943" to be "success or failure" Jun 26 21:41:07.180: INFO: Pod "pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.440417ms Jun 26 21:41:09.184: INFO: Pod "pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060818323s Jun 26 21:41:11.188: INFO: Pod "pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065036449s STEP: Saw pod success Jun 26 21:41:11.188: INFO: Pod "pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c" satisfied condition "success or failure" Jun 26 21:41:11.191: INFO: Trying to get logs from node jerma-worker pod pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c container test-container: STEP: delete the pod Jun 26 21:41:11.242: INFO: Waiting for pod pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c to disappear Jun 26 21:41:11.260: INFO: Pod pod-08016c06-67b0-41cf-a1ff-b7d16b036b9c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:41:11.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3943" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:41:11.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jun 26 21:41:11.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3405' Jun 26 21:41:11.668: INFO: stderr: "" Jun 26 21:41:11.668: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 26 21:41:12.690: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:41:12.690: INFO: Found 0 / 1 Jun 26 21:41:13.673: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:41:13.673: INFO: Found 0 / 1 Jun 26 21:41:14.721: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:41:14.721: INFO: Found 1 / 1 Jun 26 21:41:14.721: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 26 21:41:14.732: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:41:14.732: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 26 21:41:14.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-clkcp --namespace=kubectl-3405 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 26 21:41:14.841: INFO: stderr: "" Jun 26 21:41:14.842: INFO: stdout: "pod/agnhost-master-clkcp patched\n" STEP: checking annotations Jun 26 21:41:14.856: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 21:41:14.856: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:41:14.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3405" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":93,"skipped":1742,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:41:14.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:41:30.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7683" for this suite. • [SLOW TEST:16.069 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":94,"skipped":1747,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:41:30.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:41:31.015: INFO: Creating deployment "webserver-deployment" Jun 26 21:41:31.019: INFO: Waiting for observed generation 1 Jun 26 21:41:33.032: INFO: Waiting for all required pods to come up Jun 26 21:41:33.037: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 26 21:41:43.051: INFO: Waiting for deployment "webserver-deployment" to complete Jun 26 21:41:43.057: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 26 21:41:43.062: INFO: Updating deployment webserver-deployment Jun 26 21:41:43.062: INFO: Waiting for observed generation 2 Jun 26 21:41:45.071: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 26 21:41:45.073: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 26 21:41:45.075: INFO: Waiting 
for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 26 21:41:45.083: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 26 21:41:45.083: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 26 21:41:45.216: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 26 21:41:45.223: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 26 21:41:45.223: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 26 21:41:45.227: INFO: Updating deployment webserver-deployment Jun 26 21:41:45.227: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jun 26 21:41:45.631: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 26 21:41:45.722: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 26 21:41:46.096: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9945 /apis/apps/v1/namespaces/deployment-9945/deployments/webserver-deployment 0bfd7b29-328b-4d1d-b2e1-2276e1372618 27538440 3 2020-06-26 21:41:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028e1378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-06-26 21:41:43 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-26 21:41:45 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 26 21:41:46.160: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9945 /apis/apps/v1/namespaces/deployment-9945/replicasets/webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 27538499 3 
2020-06-26 21:41:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 0bfd7b29-328b-4d1d-b2e1-2276e1372618 0xc002b75597 0xc002b75598}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b75608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:41:46.160: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 26 21:41:46.160: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9945 /apis/apps/v1/namespaces/deployment-9945/replicasets/webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 27538505 3 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 0bfd7b29-328b-4d1d-b2e1-2276e1372618 0xc002b754d7 0xc002b754d8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b75538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:41:46.240: INFO: Pod "webserver-deployment-595b5b9587-5kmtj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5kmtj webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-5kmtj 210899b0-968f-4acd-9fd5-d3544f8aa52b 27538480 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc002b75ab7 0xc002b75ab8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-26 21:41:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.240: INFO: Pod "webserver-deployment-595b5b9587-7jqcb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7jqcb webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-7jqcb 40672ae1-9f4c-41f0-9cbe-3cbbe1577c93 27538496 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc002b75c17 0xc002b75c18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Prio
rity:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-26 21:41:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.240: INFO: Pod "webserver-deployment-595b5b9587-8l9fq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8l9fq webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-8l9fq 75ebbf44-6890-4d3d-8479-0a61887410e1 27538352 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc002b75d77 0xc002b75d78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.170,StartTime:2020-06-26 21:41:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc49a468fe55df1335ec8f573f611be495bb5c70584a6a5bd390b8fd7939740b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.241: INFO: Pod "webserver-deployment-595b5b9587-b9xv4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b9xv4 webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-b9xv4 04ded42b-b8ce-4684-8185-5c6487071586 27538297 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc002b75ef7 0xc002b75ef8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.169,StartTime:2020-06-26 21:41:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://94a0ef3cf0cd63acfbd715058990c6be11b5211de65a16049079b326c7473c5d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.241: INFO: Pod "webserver-deployment-595b5b9587-bfstb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bfstb webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-bfstb fc6feea1-c916-464d-8bb0-d2ef05830e6a 27538467 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686077 0xc003686078}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.241: INFO: Pod "webserver-deployment-595b5b9587-cjqct" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cjqct webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-cjqct 26237428-e3a9-4fe0-b2d1-397ec7b9c5bf 27538261 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686197 0xc003686198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.168,StartTime:2020-06-26 21:41:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a71971ec9b091b550092c62f7fca36c890f576d986aed9fe637cacc4af10afc0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.168,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.241: INFO: Pod "webserver-deployment-595b5b9587-gbtsx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gbtsx webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-gbtsx 45d3e6cb-f1cd-4e21-92be-3ca903d5d837 27538296 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686317 0xc003686318}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.81,StartTime:2020-06-26 21:41:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://60ac931cc6cb57e7d4ab5278331d91014b4d4a9cf97a39944a9116141fbf3771,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.241: INFO: Pod "webserver-deployment-595b5b9587-gnd28" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gnd28 webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-gnd28 23a81d4c-ab6f-4076-8c19-69980ebeb20a 27538471 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686497 0xc003686498}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.241: INFO: Pod "webserver-deployment-595b5b9587-h5t4h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h5t4h webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-h5t4h 7d4c4532-840e-4bbc-937b-75be7a6ae672 27538472 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc0036865b7 0xc0036865b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.242: INFO: Pod "webserver-deployment-595b5b9587-krvnh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-krvnh webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-krvnh 501dc8dd-7505-408b-ab8f-6beba4c9c016 27538468 0 
2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc0036866d7 0xc0036866d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.242: INFO: Pod "webserver-deployment-595b5b9587-kvs8n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kvs8n webserver-deployment-595b5b9587- deployment-9945 
/api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-kvs8n 3300063b-466e-4c67-8498-eb2fa7893c6c 27538502 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686807 0xc003686808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-26 21:41:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.242: INFO: Pod "webserver-deployment-595b5b9587-ll56h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ll56h webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-ll56h 4a3ad518-e214-4bd3-8d12-d868a9973044 27538308 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686967 0xc003686968}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Oper
ator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.82,StartTime:2020-06-26 21:41:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://57e442ae1b369f6caff315c2a08865c93f00c0f0a49b619c626811c11c8d2cf9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.242: INFO: Pod "webserver-deployment-595b5b9587-lpj5g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lpj5g webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-lpj5g f07a5fab-f889-4121-b6f9-ff1ddd6a4e2e 27538444 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686ae7 0xc003686ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.242: INFO: Pod "webserver-deployment-595b5b9587-n7mkb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n7mkb webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-n7mkb aa9f11ba-df40-4631-b55b-6a1aabfe8e40 27538451 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686c07 0xc003686c08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.242: INFO: Pod "webserver-deployment-595b5b9587-nsd9j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsd9j webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-nsd9j b4b6ebf0-5e09-41bd-9c1c-836b10d9c468 27538336 0 2020-06-26 
21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686d27 0xc003686d28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.84,StartTime:2020-06-26 21:41:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b7f2b99c511921059c829a81ea7d3759e1253540887e20fd1c10b034a7f6939f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.243: INFO: Pod "webserver-deployment-595b5b9587-pvmdk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pvmdk webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-pvmdk 03f9dd55-95b9-4a58-a576-957076dff732 27538332 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003686ea7 0xc003686ea8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.85,StartTime:2020-06-26 21:41:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1520bc18f66083372219608ff71c619f8eca79e0f2255cb61547275768194133,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.243: INFO: Pod "webserver-deployment-595b5b9587-rbhgh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rbhgh webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-rbhgh 1c45851d-a44e-4428-b992-310607722a6d 27538443 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003687027 0xc003687028}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.243: INFO: Pod "webserver-deployment-595b5b9587-rtqhs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rtqhs webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-rtqhs ee792bf4-b148-46d4-9c8e-434b9e33ddf2 27538470 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003687147 0xc003687148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.243: INFO: Pod "webserver-deployment-595b5b9587-spmjp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-spmjp webserver-deployment-595b5b9587- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-spmjp 8bd07885-ea57-42b0-8a78-e6f5936d76be 27538441 0 
2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003687267 0xc003687268}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.243: INFO: Pod "webserver-deployment-595b5b9587-t7vtb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t7vtb webserver-deployment-595b5b9587- deployment-9945 
/api/v1/namespaces/deployment-9945/pods/webserver-deployment-595b5b9587-t7vtb 7f69f2af-eace-4a01-9fba-60a2f2c1a02c 27538303 0 2020-06-26 21:41:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5feca8ad-8f19-4888-a309-7c8ee4e58e5e 0xc003687387 0xc003687388}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.83,StartTime:2020-06-26 21:41:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:41:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d4682877dc2f1f02fe25299a525490332c5ce5ff8ac0cd593c18f386bfc6625a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-26g4q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-26g4q webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-26g4q 12a19495-64b9-4050-ab25-329fbfe4fc9e 27538494 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687507 0xc003687508}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Cont
ainer{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-8kk6v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8kk6v webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-8kk6v 7f549d8d-2406-4389-8350-d44df3739f44 27538478 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687637 0xc003687638}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Af
finity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-8ndrf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8ndrf webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-8ndrf c9c92a8e-e54f-450e-ad46-c429647dc462 27538462 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687767 0xc003687768}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},I
magePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-9lhf4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9lhf4 webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-9lhf4 725629bf-6732-471b-9266-6a28bfbdad11 27538404 0 2020-06-26 21:41:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687897 0xc003687898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGr
oup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-26 21:41:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-cs2tp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cs2tp webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-cs2tp 8e0c79ee-6fda-4e70-9fa8-385882f7254d 27538506 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687a57 0xc003687a58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-26 21:41:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-hpcq4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hpcq4 webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-hpcq4 256dad37-bb9b-485c-ba50-293b1df54e9d 27538392 0 2020-06-26 21:41:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687d87 0xc003687d88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-26 21:41:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.244: INFO: Pod "webserver-deployment-c7997dcc8-j5xfx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j5xfx webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-j5xfx cc5d4628-dad7-4730-96c6-c8b3383c265d 27538477 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc003687fb7 0xc003687fb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.245: INFO: Pod "webserver-deployment-c7997dcc8-j6ndn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j6ndn webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-j6ndn 78fb5428-656a-4a0b-bf35-09a8a040d76d 27538479 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc00365e0f7 0xc00365e0f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.245: INFO: Pod "webserver-deployment-c7997dcc8-jc7jj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jc7jj webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-jc7jj c4f446cf-f3c3-46bf-bfda-be47e8f0f544 27538476 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc00365e227 0xc00365e228}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.245: INFO: Pod "webserver-deployment-c7997dcc8-q45x7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q45x7 webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-q45x7 
44124e30-e481-4e64-a4d5-859bddd174b0 27538384 0 2020-06-26 21:41:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc00365e357 0xc00365e358}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-26 21:41:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.245: INFO: Pod "webserver-deployment-c7997dcc8-skkcz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-skkcz webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-skkcz 2ee0b864-c2b6-4e7e-9923-e3061444e6aa 27538469 0 2020-06-26 21:41:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc00365e4e7 0xc00365e4e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.245: INFO: Pod "webserver-deployment-c7997dcc8-wfr59" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfr59 webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-wfr59 d7347d2c-05b6-49e9-9120-cc26a3bf0f05 27538413 0 2020-06-26 21:41:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc00365e617 0xc00365e618}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolerati
on{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-26 21:41:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 21:41:46.246: INFO: Pod "webserver-deployment-c7997dcc8-xj2sd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xj2sd webserver-deployment-c7997dcc8- deployment-9945 /api/v1/namespaces/deployment-9945/pods/webserver-deployment-c7997dcc8-xj2sd 540f679d-f1e6-48cc-948b-2cf65d38bdd8 27538408 0 2020-06-26 21:41:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 8f01ba76-3e50-474b-8df0-664ea44dc5d7 0xc00365e797 0xc00365e798}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mrr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mrr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mrr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:41:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-26 21:41:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:41:46.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9945" for this suite. • [SLOW TEST:15.437 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":95,"skipped":1760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:41:46.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 26 21:41:46.650: INFO: Waiting up to 5m0s for pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625" in namespace "downward-api-1457" to be "success or failure" Jun 26 21:41:46.663: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 12.441488ms Jun 26 21:41:48.978: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327980633s Jun 26 21:41:51.069: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 4.418885026s Jun 26 21:41:53.407: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756536716s Jun 26 21:41:55.440: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 8.789559016s Jun 26 21:41:57.565: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 10.914723645s Jun 26 21:41:59.830: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. Elapsed: 13.179993061s Jun 26 21:42:01.870: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.220056025s Jun 26 21:42:04.080: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.429834987s STEP: Saw pod success Jun 26 21:42:04.080: INFO: Pod "downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625" satisfied condition "success or failure" Jun 26 21:42:04.131: INFO: Trying to get logs from node jerma-worker pod downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625 container dapi-container: STEP: delete the pod Jun 26 21:42:04.554: INFO: Waiting for pod downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625 to disappear Jun 26 21:42:04.679: INFO: Pod downward-api-01699c93-0f75-43d8-b7c8-e9d7cac38625 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:42:04.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1457" for this suite. • [SLOW TEST:18.803 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1787,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:42:05.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 26 21:42:06.200: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 21:42:06.599: INFO: Waiting for terminating namespaces to be deleted... 
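For context on the Downward API test that just passed: the pod's own UID reaches the container as an environment variable through a fieldRef. A minimal sketch of such a pod, built with the k8s.io/api types and printed as a manifest — the pod name, image, and command here are illustrative assumptions, not the suite's own values:

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod mirroring the mechanism under test: the Downward API injecting
	// the pod's own UID into the container environment via a fieldRef.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.31", // placeholder; the suite uses its own test image
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	// Emit a manifest that could be piped to `kubectl apply -f -`.
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

The later host-IP variant of this test swaps the fieldPath for status.hostIP; everything else is the same shape.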
Jun 26 21:42:06.801: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jun 26 21:42:06.965: INFO: webserver-deployment-595b5b9587-nsd9j from deployment-9945 started at 2020-06-26 21:41:31 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-595b5b9587-pvmdk from deployment-9945 started at 2020-06-26 21:41:31 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-c7997dcc8-cs2tp from deployment-9945 started at 2020-06-26 21:41:45 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-595b5b9587-rbhgh from deployment-9945 started at 2020-06-26 21:41:45 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-c7997dcc8-8ndrf from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:06.965: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 21:42:06.965: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-595b5b9587-t7vtb from deployment-9945 started at 2020-06-26 21:41:31 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-c7997dcc8-9lhf4 from deployment-9945 started at 2020-06-26 21:41:43 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-c7997dcc8-jc7jj from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-c7997dcc8-hpcq4 from deployment-9945 started at 2020-06-26 21:41:43 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-c7997dcc8-8kk6v from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:06.965: INFO: webserver-deployment-595b5b9587-rtqhs from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:06.965: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:06.965: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jun 26 21:42:07.006: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container kube-bench ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-wfr59 from deployment-9945 started at 2020-06-26 21:41:43 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container 
httpd ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-595b5b9587-lpj5g from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:07.006: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-xj2sd from deployment-9945 started at 2020-06-26 21:41:43 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-595b5b9587-n7mkb from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:07.006: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-595b5b9587-spmjp from deployment-9945 started at 2020-06-26 21:41:45 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: true, restart count 0 Jun 26 21:42:07.006: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container kube-hunter ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-skkcz from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-j5xfx from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-j6ndn from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-q45x7 from deployment-9945 started at 2020-06-26 21:41:43 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: false, restart count 0 Jun 26 21:42:07.006: INFO: webserver-deployment-c7997dcc8-26g4q from deployment-9945 started at 2020-06-26 21:41:46 +0000 UTC (1 container statuses recorded) Jun 26 21:42:07.006: INFO: Container httpd ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
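The STEP lines that follow create three pods sharing hostPort 54321 on the same node; all three can be scheduled because the conflict key is the (hostIP, hostPort, protocol) triple, not the port alone. A rough sketch of those specs — it pins pods with NodeName for brevity instead of the test's random node label, and the image and node name are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod pinned to one node with a fixed hostPort.
func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeName: "jerma-worker2", // simplification; the test selects via a random label
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "registry.example/agnhost:latest", // placeholder image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	// Same hostPort on the same node, but the (hostIP, protocol) pairs
	// differ, so the scheduler treats the pods as non-conflicting.
	pods := []*corev1.Pod{
		hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP),
		hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP),
		hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP),
	}
	for _, p := range pods {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}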
STEP: verifying the node has the label kubernetes.io/e2e-6d9372d2-e729-473e-81a1-261dde07dcaa 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-6d9372d2-e729-473e-81a1-261dde07dcaa off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-6d9372d2-e729-473e-81a1-261dde07dcaa [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:42:30.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6738" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:25.299 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":97,"skipped":1793,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:42:30.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 26 21:42:30.579: INFO: Waiting up to 5m0s for pod "downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8" in namespace "downward-api-1499" to be "success or failure" Jun 26 21:42:30.609: INFO: Pod "downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.022873ms Jun 26 21:42:32.625: INFO: Pod "downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046019001s Jun 26 21:42:34.643: INFO: Pod "downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064007704s STEP: Saw pod success Jun 26 21:42:34.643: INFO: Pod "downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8" satisfied condition "success or failure" Jun 26 21:42:34.645: INFO: Trying to get logs from node jerma-worker pod downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8 container dapi-container: STEP: delete the pod Jun 26 21:42:34.683: INFO: Waiting for pod downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8 to disappear Jun 26 21:42:34.695: INFO: Pod downward-api-f0740121-5ef6-49c1-a796-a451b9a104b8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:42:34.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1499" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1811,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:42:34.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8223.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8223.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8223.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8223.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8223.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8223.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 21:42:42.876: INFO: DNS probes using dns-8223/dns-test-7f1ed490-2232-4157-b2f0-e71446b03e38 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:42:42.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8223" for this suite. • [SLOW TEST:8.219 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":99,"skipped":1817,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:42:42.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:43:43.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4852" for this suite. 
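The container-probe test above relies on a readiness probe that always fails: the container keeps running, the pod never becomes Ready, and — unlike a failing liveness probe — the container is never restarted. A minimal sketch of such a pod; the image, command, and probe timings are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-target",
				Image:   "busybox:1.31", // placeholder image
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					// The embedded field is named Handler in v1.17-era
					// client-go (renamed ProbeHandler in newer releases).
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}

With this spec the test's expectation follows directly: Ready stays False with restart count 0, which is exactly what the 60-second observation window checks.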
• [SLOW TEST:60.110 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:43:43.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-fa92571a-5119-4a3f-8d7c-299ec6c8ba88 STEP: Creating configMap with name cm-test-opt-upd-db7f15a4-3602-44c8-82b4-f9151ec1af6d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fa92571a-5119-4a3f-8d7c-299ec6c8ba88 STEP: Updating configmap cm-test-opt-upd-db7f15a4-3602-44c8-82b4-f9151ec1af6d STEP: Creating configMap with name cm-test-opt-create-cbd50cee-e317-4b14-9885-20bd1a9eb60e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:45:08.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7305" for this suite. 
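The ConfigMap test above hinges on Optional volume sources: the pod starts even if a referenced ConfigMap is absent, and the kubelet propagates the delete/update/create sequence from the STEPs into the mounted files, which is the update the test waits to observe. A sketch of the pod side — ConfigMap names echo the log, while the pod name, image, and mount paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	// Helper building a ConfigMap-backed volume whose source is optional,
	// so a missing ConfigMap does not block pod startup.
	cmVolume := func(name, cmName string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					Optional:             &optional,
				},
			},
		}
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "cm-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				cmVolume("cm-del", "cm-test-opt-del"),
				cmVolume("cm-upd", "cm-test-opt-upd"),
			},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox:1.31", // placeholder
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-del", MountPath: "/etc/cm-del"},
					{Name: "cm-upd", MountPath: "/etc/cm-upd"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}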
• [SLOW TEST:85.213 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1886,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:45:08.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:45:08.759: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:45:10.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804708, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804708, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804708, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804708, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:45:13.836: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:45:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2467" for this suite. 
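The listing/deletion pattern in the webhook test above maps onto two calls against the admissionregistration/v1 API: a label-filtered List followed by DeleteCollection with the same selector. A hedged client-go sketch — it assumes the v0.18+ context-taking method signatures (the v1.17-era client omits the context argument), and the label selector is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	webhooks := client.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	// List the configurations by a shared label, then delete them as a
	// collection -- the two operations the test performs back to back.
	sel := "e2e-list-test-uid=some-test-uid" // illustrative label selector
	list, err := webhooks.List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		log.Fatal(err)
	}
	for _, wh := range list.Items {
		fmt.Println("found:", wh.Name)
	}
	if err := webhooks.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: sel}); err != nil {
		log.Fatal(err)
	}
}

After the DeleteCollection, a ConfigMap that violated the webhook rules is accepted again — the final two STEPs above verify rejection before and admission after.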
STEP: Destroying namespace "webhook-2467-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.076 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":102,"skipped":1893,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:45:15.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:45:15.369: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 26 21:45:17.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3771 create -f -' Jun 26 21:45:20.388: INFO: stderr: "" Jun 26 21:45:20.388: INFO: stdout: "e2e-test-crd-publish-openapi-3505-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 26 21:45:20.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3771 delete e2e-test-crd-publish-openapi-3505-crds test-cr' Jun 26 21:45:20.551: INFO: stderr: "" Jun 26 21:45:20.551: INFO: stdout: "e2e-test-crd-publish-openapi-3505-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 26 21:45:20.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3771 apply -f -' Jun 26 21:45:21.872: INFO: stderr: "" Jun 26 21:45:21.872: INFO: stdout: "e2e-test-crd-publish-openapi-3505-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 26 21:45:21.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3771 delete e2e-test-crd-publish-openapi-3505-crds test-cr' Jun 26 21:45:21.973: INFO: stderr: "" Jun 26 21:45:21.973: INFO: stdout: "e2e-test-crd-publish-openapi-3505-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 26 21:45:21.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3505-crds' Jun 26 21:45:22.234: INFO: stderr: "" Jun 26 21:45:22.234: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-3505-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:45:25.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3771" for this suite. • [SLOW TEST:9.802 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":103,"skipped":1901,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:45:25.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-1315bce6-0866-4d80-8b1c-4a1160a10b72 STEP: Creating secret with name s-test-opt-upd-37220490-e599-4e20-a873-be117234313a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1315bce6-0866-4d80-8b1c-4a1160a10b72 STEP: Updating secret s-test-opt-upd-37220490-e599-4e20-a873-be117234313a STEP: Creating secret with name s-test-opt-create-6ca8fe16-7cd7-4397-b552-4017f91b0ed1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:45:33.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9390" for this suite. 
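As with the earlier ConfigMap variant, this projected-secret test deletes one optional source, updates a second, and creates a third while the pod runs, then waits for the volume to reflect all three changes. A minimal sketch of the volume shape, with placeholder Secret names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-demo-del         # deleted mid-test; optional, so the pod keeps running
          optional: true
      - secret:
          name: s-demo-upd         # updated in place; new data shows up in the volume
          optional: true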
• [SLOW TEST:8.246 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1906,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:45:33.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jun 26 21:45:33.460: INFO: Waiting up to 5m0s for pod "var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17" in namespace "var-expansion-2691" to be "success or failure" Jun 26 21:45:33.463: INFO: Pod "var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334019ms Jun 26 21:45:35.468: INFO: Pod "var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007601842s Jun 26 21:45:37.470: INFO: Pod "var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010329389s STEP: Saw pod success Jun 26 21:45:37.470: INFO: Pod "var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17" satisfied condition "success or failure" Jun 26 21:45:37.472: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17 container dapi-container: STEP: delete the pod Jun 26 21:45:37.519: INFO: Waiting for pod var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17 to disappear Jun 26 21:45:37.541: INFO: Pod var-expansion-5c5fc5c0-c5c8-4157-90de-155bb13abc17 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:45:37.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2691" for this suite. 
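The variable-expansion test creates a pod whose command references an environment variable with the $(VAR) syntax, which the kubelet substitutes before the container starts. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is expanded by the kubelet, not by the shell
    command: ["sh", "-c", "echo $(MESSAGE)"]

The suite then reads the container log and checks it contains the substituted value, which is why the run above fetches logs from the dapi-container after the pod succeeds.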
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1921,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:45:37.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jun 26 21:45:42.207: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1275 pod-service-account-33b48948-5642-4fb3-9a6a-cbb33204d330 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 26 21:45:42.420: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1275 pod-service-account-33b48948-5642-4fb3-9a6a-cbb33204d330 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 26 21:45:42.608: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1275 pod-service-account-33b48948-5642-4fb3-9a6a-cbb33204d330 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:45:42.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1275" for this suite. 
• [SLOW TEST:5.280 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":106,"skipped":1935,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:45:42.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6318 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6318 STEP: creating replication controller externalsvc in namespace services-6318 I0626 21:45:43.007156 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6318, replica count: 2 I0626 21:45:46.057557 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 21:45:49.057816 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 26 21:45:49.137: INFO: Creating new exec pod Jun 26 21:45:53.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6318 execpod7d4sk -- /bin/sh -x -c nslookup clusterip-service' Jun 26 21:45:53.585: INFO: stderr: "I0626 21:45:53.299838 1778 log.go:172] (0xc0001149a0) (0xc0006d3f40) Create stream\nI0626 21:45:53.299907 1778 log.go:172] (0xc0001149a0) (0xc0006d3f40) Stream added, broadcasting: 1\nI0626 21:45:53.304380 1778 log.go:172] (0xc0001149a0) Reply frame received for 1\nI0626 21:45:53.304407 1778 log.go:172] (0xc0001149a0) (0xc000690820) Create stream\nI0626 21:45:53.304418 1778 log.go:172] (0xc0001149a0) (0xc000690820) Stream added, broadcasting: 3\nI0626 21:45:53.305048 1778 log.go:172] (0xc0001149a0) Reply frame received for 3\nI0626 21:45:53.305071 1778 log.go:172] (0xc0001149a0) (0xc00049f5e0) Create stream\nI0626 21:45:53.305077 1778 log.go:172] (0xc0001149a0) (0xc00049f5e0) Stream added, broadcasting: 5\nI0626 21:45:53.305862 1778 log.go:172] (0xc0001149a0) Reply frame received for 5\nI0626 21:45:53.405755 1778 log.go:172] (0xc0001149a0) Data frame received for 5\nI0626 21:45:53.405782 1778 log.go:172] (0xc00049f5e0) (5) Data frame handling\nI0626 21:45:53.405802 1778 log.go:172] (0xc00049f5e0) (5) Data 
frame sent\n+ nslookup clusterip-service\nI0626 21:45:53.570454 1778 log.go:172] (0xc0001149a0) Data frame received for 3\nI0626 21:45:53.570475 1778 log.go:172] (0xc000690820) (3) Data frame handling\nI0626 21:45:53.570496 1778 log.go:172] (0xc000690820) (3) Data frame sent\nI0626 21:45:53.571412 1778 log.go:172] (0xc0001149a0) Data frame received for 3\nI0626 21:45:53.571434 1778 log.go:172] (0xc000690820) (3) Data frame handling\nI0626 21:45:53.571450 1778 log.go:172] (0xc000690820) (3) Data frame sent\nI0626 21:45:53.571917 1778 log.go:172] (0xc0001149a0) Data frame received for 5\nI0626 21:45:53.571930 1778 log.go:172] (0xc00049f5e0) (5) Data frame handling\nI0626 21:45:53.572066 1778 log.go:172] (0xc0001149a0) Data frame received for 3\nI0626 21:45:53.572077 1778 log.go:172] (0xc000690820) (3) Data frame handling\nI0626 21:45:53.574816 1778 log.go:172] (0xc0001149a0) Data frame received for 1\nI0626 21:45:53.574840 1778 log.go:172] (0xc0006d3f40) (1) Data frame handling\nI0626 21:45:53.574858 1778 log.go:172] (0xc0006d3f40) (1) Data frame sent\nI0626 21:45:53.574887 1778 log.go:172] (0xc0001149a0) (0xc0006d3f40) Stream removed, broadcasting: 1\nI0626 21:45:53.574911 1778 log.go:172] (0xc0001149a0) Go away received\nI0626 21:45:53.575383 1778 log.go:172] (0xc0001149a0) (0xc0006d3f40) Stream removed, broadcasting: 1\nI0626 21:45:53.575413 1778 log.go:172] (0xc0001149a0) (0xc000690820) Stream removed, broadcasting: 3\nI0626 21:45:53.575429 1778 log.go:172] (0xc0001149a0) (0xc00049f5e0) Stream removed, broadcasting: 5\n" Jun 26 21:45:53.585: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6318.svc.cluster.local\tcanonical name = externalsvc.services-6318.svc.cluster.local.\nName:\texternalsvc.services-6318.svc.cluster.local\nAddress: 10.109.19.78\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6318, will wait for the garbage collector to delete the pods Jun 26 21:45:53.643: INFO: Deleting ReplicationController externalsvc took: 4.959083ms Jun 26 21:45:53.944: INFO: Terminating ReplicationController externalsvc pods took: 300.233984ms Jun 26 21:46:09.612: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:46:09.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6318" for this suite. 
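The Services test converts a ClusterIP service to type ExternalName and then checks, via the captured nslookup output, that the service name now resolves to a CNAME for the target. A sketch of the post-conversion object (the test's namespace and names, but not its literal manifest):

apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-6318
spec:
  type: ExternalName
  externalName: externalsvc.services-6318.svc.cluster.local
  # when flipping an existing ClusterIP service, spec.clusterIP must be cleared
  # in the same update; ExternalName services may not keep a cluster IP

That CNAME is exactly what the nslookup stdout above shows: clusterip-service.services-6318.svc.cluster.local canonical name = externalsvc.services-6318.svc.cluster.local.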
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.845 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":107,"skipped":1936,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:46:09.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jun 26 21:46:10.307: INFO: created pod pod-service-account-defaultsa Jun 26 21:46:10.307: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 26 21:46:10.353: INFO: created pod pod-service-account-mountsa Jun 26 21:46:10.353: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 26 21:46:10.388: INFO: created pod pod-service-account-nomountsa Jun 26 21:46:10.388: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 26 21:46:10.400: INFO: created pod pod-service-account-defaultsa-mountspec Jun 26 21:46:10.400: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 26 21:46:10.426: INFO: created pod pod-service-account-mountsa-mountspec Jun 26 21:46:10.426: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 26 21:46:10.492: INFO: created pod pod-service-account-nomountsa-mountspec Jun 26 21:46:10.492: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 26 21:46:10.503: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 26 21:46:10.503: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 26 21:46:10.543: INFO: created pod pod-service-account-mountsa-nomountspec Jun 26 21:46:10.543: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 26 21:46:10.643: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 26 21:46:10.643: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:46:10.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4238" for this suite. 
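The nine pods above cover the automount matrix: the token volume can be disabled on the ServiceAccount, on the pod spec, or both, and the pod-level setting wins when the two disagree. A sketch of the two knobs, with illustrative names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # default for pods using this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level setting overrides the ServiceAccount's
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]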
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":108,"skipped":1958,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:46:10.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:46:29.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5532" for this suite. • [SLOW TEST:19.113 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":109,"skipped":1991,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:46:29.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:46:30.545: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:46:32.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804790, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804790, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804790, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804790, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:46:35.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:46:35.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6776" for this suite. STEP: Destroying namespace "webhook-6776-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.898 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":110,"skipped":2013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:46:35.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:46:35.827: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8d7f9c74-e757-465c-9cf8-5148c790c0d4" in namespace "security-context-test-91" to be "success or failure" Jun 26 21:46:35.831: INFO: Pod "alpine-nnp-false-8d7f9c74-e757-465c-9cf8-5148c790c0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716336ms Jun 26 21:46:37.835: INFO: Pod "alpine-nnp-false-8d7f9c74-e757-465c-9cf8-5148c790c0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007546953s Jun 26 21:46:39.839: INFO: Pod "alpine-nnp-false-8d7f9c74-e757-465c-9cf8-5148c790c0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012014321s Jun 26 21:46:39.840: INFO: Pod "alpine-nnp-false-8d7f9c74-e757-465c-9cf8-5148c790c0d4" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:46:39.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-91" for this suite. 
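The Security Context test runs a container with allowPrivilegeEscalation: false and verifies the process sees the resulting no_new_privs flag. A minimal sketch with illustrative names (the NoNewPrivs line in /proc/self/status should read 1):

apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false   # sets no_new_privs on the container process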
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":2040,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:46:39.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 26 21:46:39.937: INFO: Waiting up to 5m0s for pod "downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b" in namespace "downward-api-3101" to be "success or failure" Jun 26 21:46:39.942: INFO: Pod "downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.439163ms Jun 26 21:46:41.946: INFO: Pod "downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009183873s Jun 26 21:46:43.951: INFO: Pod "downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013600136s STEP: Saw pod success Jun 26 21:46:43.951: INFO: Pod "downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b" satisfied condition "success or failure" Jun 26 21:46:43.954: INFO: Trying to get logs from node jerma-worker pod downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b container dapi-container: STEP: delete the pod Jun 26 21:46:43.980: INFO: Waiting for pod downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b to disappear Jun 26 21:46:43.984: INFO: Pod downward-api-8965717e-3cd4-4327-af0c-7bfc9e90bd2b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:46:43.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3101" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":2069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:46:44.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0626 21:47:14.625758 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 26 21:47:14.625: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:47:14.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4610" for this suite. 
• [SLOW TEST:30.622 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":113,"skipped":2095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:47:14.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:47:15.087: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:47:17.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804835, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804835, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804835, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728804835, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:47:20.143: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:47:20.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9360" for this suite. 
STEP: Destroying namespace "webhook-9360-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.847 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":114,"skipped":2140,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:47:21.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6742.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6742.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6742.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 21:47:30.390: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.394: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.398: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.401: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.412: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.416: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.419: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod 
dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.422: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:30.430: INFO: Lookups using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local] Jun 26 21:47:35.435: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.438: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.442: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.444: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.453: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.456: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.459: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.463: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:35.469: INFO: Lookups using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local] Jun 26 21:47:40.435: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.438: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.441: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.444: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.454: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.457: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.461: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.464: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:40.470: INFO: Lookups using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local] Jun 26 21:47:45.440: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.443: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.446: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.449: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.456: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.459: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.461: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.464: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:45.469: INFO: Lookups using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local] Jun 26 21:47:50.451: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.455: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.459: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.463: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested 
resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.471: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.474: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.477: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.481: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:50.487: INFO: Lookups using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local] Jun 26 21:47:55.434: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.437: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.441: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.444: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.454: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.458: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.462: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.464: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local from pod dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b: the server could not find the requested resource (get pods dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b) Jun 26 21:47:55.470: INFO: Lookups using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6742.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6742.svc.cluster.local jessie_udp@dns-test-service-2.dns-6742.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6742.svc.cluster.local] Jun 26 21:48:00.491: INFO: DNS probes using dns-6742/dns-test-0f86ec46-5d81-4548-a13c-c33f2e71350b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:48:00.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6742" for this suite. • [SLOW TEST:39.528 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":115,"skipped":2146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:48:01.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-mpjg STEP: Creating a pod to test atomic-volume-subpath Jun 26 21:48:01.172: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mpjg" in namespace "subpath-6141" to be "success or failure" Jun 26 21:48:01.230: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.378172ms Jun 26 21:48:03.234: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062655685s Jun 26 21:48:05.238: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066857101s Jun 26 21:48:07.251: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 6.07913908s Jun 26 21:48:09.255: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 8.082928421s Jun 26 21:48:11.259: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 10.087796426s Jun 26 21:48:13.264: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 12.09211621s Jun 26 21:48:15.268: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 14.096772452s Jun 26 21:48:17.273: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 16.101356649s Jun 26 21:48:19.277: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 18.105724766s Jun 26 21:48:21.281: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 20.109663246s Jun 26 21:48:23.286: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 22.114027308s Jun 26 21:48:25.290: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Running", Reason="", readiness=true. Elapsed: 24.118537945s Jun 26 21:48:27.295: INFO: Pod "pod-subpath-test-configmap-mpjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.122873923s STEP: Saw pod success Jun 26 21:48:27.295: INFO: Pod "pod-subpath-test-configmap-mpjg" satisfied condition "success or failure" Jun 26 21:48:27.298: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-mpjg container test-container-subpath-configmap-mpjg: STEP: delete the pod Jun 26 21:48:27.331: INFO: Waiting for pod pod-subpath-test-configmap-mpjg to disappear Jun 26 21:48:27.335: INFO: Pod pod-subpath-test-configmap-mpjg no longer exists STEP: Deleting pod pod-subpath-test-configmap-mpjg Jun 26 21:48:27.335: INFO: Deleting pod "pod-subpath-test-configmap-mpjg" in namespace "subpath-6141" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:48:27.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6141" for this suite. 
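------------------------------
For reference while reading the subpath result above: the core of what pod-subpath-test-configmap-mpjg exercises is a container mounting a single ConfigMap key at a subPath. A minimal client-go sketch of that setup follows; the names, namespace, busybox image, and the context-taking Create signatures (client-go v0.18+, whereas the v1.17-era clients used in this run predate the ctx argument) are illustrative assumptions, not taken from the test source.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()
	ns := "default" // the e2e run used a generated namespace (subpath-6141)

	// A ConfigMap whose single key will be exposed through a subPath mount.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Data:       map[string]string{"data": "hello from a subPath"},
	}
	_, err = cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	must(err)

	// The container sees only the one key, mounted as a file at /mnt/file.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-demo"},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"cat", "/mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm",
					MountPath: "/mnt/file",
					SubPath:   "data", // mount just this key, not the whole volume
				}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("created pod-subpath-demo; wait for it to succeed, then read its logs")
}
------------------------------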
• [SLOW TEST:26.336 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":116,"skipped":2274,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:48:27.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:48:27.410: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 26 21:48:29.467: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:48:30.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1545" for this suite. 
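------------------------------
The exceeded-quota scenario above is small enough to reproduce directly: cap the namespace at two pods with a ResourceQuota, create a ReplicationController that asks for three, and read back the ReplicaFailure condition the controller surfaces. The sketch reuses the condition-test name from the log; the namespace, pause image, label selector, and the crude fixed sleep (a real check would poll the conditions) are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()
	ns := "default"

	// Quota that allows only two pods in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	_, err = cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{})
	must(err)

	// An RC that asks for one more replica than the quota permits.
	replicas := int32(3)
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pause",
					Image: "k8s.gcr.io/pause:3.1",
				}}},
			},
		},
	}
	_, err = cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
	must(err)

	time.Sleep(5 * time.Second) // crude; poll until the condition appears in real code

	got, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
	must(err)
	for _, c := range got.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Printf("ReplicaFailure=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
		}
	}
}

Scaling Replicas back down to the quota, as the test does in its second half, makes the controller drop the failure condition again.
------------------------------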
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":117,"skipped":2282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:48:30.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 26 21:48:31.237: INFO: >>> kubeConfig: /root/.kube/config Jun 26 21:48:33.407: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:48:44.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7622" for this suite. • [SLOW TEST:14.161 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":118,"skipped":2336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:48:44.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jun 26 21:48:44.959: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:48:45.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7983" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":119,"skipped":2417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:48:45.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0626 21:49:25.520318 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 26 21:49:25.520: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:49:25.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3064" for this suite. 
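------------------------------
The garbage-collector behavior verified above hinges on one field: the delete propagation policy. A minimal sketch of the orphaning delete, assuming an RC named simpletest.rc whose pods carry a name=simpletest label (the log does not show the actual names; both are assumptions) and client-go v0.18+ signatures:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()
	ns := "default"

	// Orphan propagation deletes the owner object but tells the garbage
	// collector to leave the dependent pods alone (their ownerReferences
	// to the RC are removed instead).
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest.rc",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	must(err)

	// The pods survive the delete; this is what the test's 30-second
	// observation window is checking.
	pods, err := cs.CoreV1().Pods(ns).List(ctx,
		metav1.ListOptions{LabelSelector: "name=simpletest"})
	must(err)
	fmt.Printf("%d pods still present after orphaning delete\n", len(pods.Items))
}
------------------------------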
• [SLOW TEST:40.481 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":120,"skipped":2440,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:49:25.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:49:25.600: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 26 21:49:25.606: INFO: Number of nodes with available pods: 0 Jun 26 21:49:25.606: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 26 21:49:25.679: INFO: Number of nodes with available pods: 0 Jun 26 21:49:25.679: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:26.685: INFO: Number of nodes with available pods: 0 Jun 26 21:49:26.685: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:27.683: INFO: Number of nodes with available pods: 0 Jun 26 21:49:27.684: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:28.685: INFO: Number of nodes with available pods: 0 Jun 26 21:49:28.685: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:29.684: INFO: Number of nodes with available pods: 1 Jun 26 21:49:29.684: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 26 21:49:29.720: INFO: Number of nodes with available pods: 1 Jun 26 21:49:29.720: INFO: Number of running nodes: 0, number of available pods: 1 Jun 26 21:49:30.732: INFO: Number of nodes with available pods: 0 Jun 26 21:49:30.732: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 26 21:49:30.763: INFO: Number of nodes with available pods: 0 Jun 26 21:49:30.763: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:31.781: INFO: Number of nodes with available pods: 0 Jun 26 21:49:31.781: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:32.769: INFO: Number of nodes with available pods: 0 Jun 26 21:49:32.769: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:33.767: INFO: Number of nodes with available pods: 0 Jun 26 21:49:33.767: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:34.768: INFO: Number of nodes with available pods: 0 Jun 26 21:49:34.768: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:35.835: INFO: Number of nodes with available pods: 0 Jun 26 21:49:35.835: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:36.768: INFO: Number of nodes with available pods: 0 Jun 26 21:49:36.768: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:37.767: INFO: Number of nodes with available pods: 0 Jun 26 21:49:37.768: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:38.768: INFO: Number of nodes with available pods: 0 Jun 26 21:49:38.768: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:39.768: INFO: Number of nodes with available pods: 0 Jun 26 21:49:39.768: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:40.782: INFO: Number of nodes with available pods: 0 Jun 26 21:49:40.782: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:41.768: INFO: Number of nodes with available pods: 0 Jun 26 21:49:41.768: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:42.768: INFO: Number of nodes with available pods: 1 Jun 26 21:49:42.768: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7416, will wait for the garbage collector to delete the pods Jun 26 21:49:42.834: INFO: Deleting DaemonSet.extensions daemon-set took: 6.840599ms Jun 26 21:49:43.135: INFO: 
Terminating DaemonSet.extensions daemon-set pods took: 300.31033ms Jun 26 21:49:49.538: INFO: Number of nodes with available pods: 0 Jun 26 21:49:49.538: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 21:49:49.541: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7416/daemonsets","resourceVersion":"27541544"},"items":null} Jun 26 21:49:49.544: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7416/pods","resourceVersion":"27541544"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:49:49.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7416" for this suite. • [SLOW TEST:24.055 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":121,"skipped":2444,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:49:49.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 26 21:49:49.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:49.704: INFO: Number of nodes with available pods: 0 Jun 26 21:49:49.704: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:50.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:50.787: INFO: Number of nodes with available pods: 0 Jun 26 21:49:50.787: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:51.723: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:51.726: INFO: Number of nodes with available pods: 0 Jun 26 21:49:51.726: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:52.723: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:52.788: INFO: Number of nodes with available pods: 0 Jun 26 21:49:52.788: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:53.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:53.717: INFO: Number of nodes with available pods: 1 Jun 26 21:49:53.717: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 21:49:54.716: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:54.723: INFO: Number of nodes with available pods: 2 Jun 26 21:49:54.723: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 26 21:49:54.770: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:54.783: INFO: Number of nodes with available pods: 1 Jun 26 21:49:54.783: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:55.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:55.951: INFO: Number of nodes with available pods: 1 Jun 26 21:49:55.951: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:56.790: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:56.800: INFO: Number of nodes with available pods: 1 Jun 26 21:49:56.800: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:57.788: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:57.792: INFO: Number of nodes with available pods: 1 Jun 26 21:49:57.792: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:58.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:58.793: INFO: Number of nodes with available pods: 1 Jun 26 21:49:58.793: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:49:59.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:49:59.793: INFO: Number of nodes with available pods: 1 Jun 26 21:49:59.793: INFO: Node jerma-worker is running more than one daemon pod Jun 26 21:50:00.788: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 21:50:00.792: INFO: Number of nodes with available pods: 2 Jun 26 21:50:00.792: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5925, will wait for the garbage collector to delete the pods Jun 26 21:50:00.852: INFO: Deleting DaemonSet.extensions daemon-set took: 4.793265ms Jun 26 21:50:01.253: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.487194ms Jun 26 21:50:05.656: INFO: Number of nodes with available pods: 0 Jun 26 21:50:05.657: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 21:50:05.660: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5925/daemonsets","resourceVersion":"27541663"},"items":null} Jun 26 21:50:05.663: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5925/pods","resourceVersion":"27541663"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 
26 21:50:05.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5925" for this suite. • [SLOW TEST:16.099 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":122,"skipped":2447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:05.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-df7f84a5-8abc-421c-8019-277428e5e1b2 STEP: Creating a pod to test consume secrets Jun 26 21:50:05.765: INFO: Waiting up to 5m0s for pod "pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c" in namespace "secrets-4365" to be "success or failure" Jun 26 21:50:05.796: INFO: Pod "pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.236436ms Jun 26 21:50:07.799: INFO: Pod "pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033398532s Jun 26 21:50:09.804: INFO: Pod "pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03827423s STEP: Saw pod success Jun 26 21:50:09.804: INFO: Pod "pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c" satisfied condition "success or failure" Jun 26 21:50:09.807: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c container secret-volume-test: STEP: delete the pod Jun 26 21:50:09.862: INFO: Waiting for pod pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c to disappear Jun 26 21:50:09.871: INFO: Pod pod-secrets-c87e79d1-9794-4cf4-9950-776c2832096c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:50:09.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4365" for this suite. 
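------------------------------
The "with mappings" variant that just passed differs from a plain secret volume in that each key is remapped to a chosen path inside the mount. A sketch of the volume wiring, with the secret name, key/path pair, namespace, and image all assumed for illustration (the log only shows generated names):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()
	ns := "default"

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	_, err = cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map",
					// The mapping: key data-1 appears as new-path-data-1.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("pod created; its logs should show value-1 once it succeeds")
}
------------------------------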
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2470,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:09.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b7429b66-244d-4108-a652-e7693a308a6d STEP: Creating a pod to test consume secrets Jun 26 21:50:09.983: INFO: Waiting up to 5m0s for pod "pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c" in namespace "secrets-5314" to be "success or failure" Jun 26 21:50:09.986: INFO: Pod "pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.514022ms Jun 26 21:50:11.990: INFO: Pod "pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007378587s Jun 26 21:50:13.993: INFO: Pod "pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010730229s STEP: Saw pod success Jun 26 21:50:13.993: INFO: Pod "pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c" satisfied condition "success or failure" Jun 26 21:50:13.996: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c container secret-volume-test: STEP: delete the pod Jun 26 21:50:14.047: INFO: Waiting for pod pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c to disappear Jun 26 21:50:14.052: INFO: Pod pod-secrets-7dddbd33-c39d-418b-b328-f2f05e95421c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:50:14.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5314" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2470,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:14.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c5370d95-d6e1-46c8-a8fe-babc8c84710a STEP: Creating a pod to test consume configMaps Jun 26 21:50:14.145: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6" in namespace "projected-9887" to be "success or failure" Jun 26 21:50:14.201: INFO: Pod "pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 56.340211ms Jun 26 21:50:16.205: INFO: Pod "pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060743493s Jun 26 21:50:18.210: INFO: Pod "pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065358612s STEP: Saw pod success Jun 26 21:50:18.210: INFO: Pod "pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6" satisfied condition "success or failure" Jun 26 21:50:18.213: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6 container projected-configmap-volume-test: STEP: delete the pod Jun 26 21:50:18.240: INFO: Waiting for pod pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6 to disappear Jun 26 21:50:18.308: INFO: Pod pod-projected-configmaps-94d8a54c-2d58-4ae0-ac88-69f61e68e2f6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:50:18.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9887" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2470,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:18.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-w7xt STEP: Creating a pod to test atomic-volume-subpath Jun 26 21:50:18.464: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-w7xt" in namespace "subpath-7355" to be "success or failure" Jun 26 21:50:18.473: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949398ms Jun 26 21:50:20.501: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036365113s Jun 26 21:50:22.505: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 4.040806328s Jun 26 21:50:24.509: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 6.044379427s Jun 26 21:50:26.513: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 8.048710011s Jun 26 21:50:28.517: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 10.052873372s Jun 26 21:50:30.522: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 12.057413564s Jun 26 21:50:32.529: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 14.06490584s Jun 26 21:50:34.534: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 16.069391105s Jun 26 21:50:36.538: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 18.074066307s Jun 26 21:50:38.543: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 20.078548618s Jun 26 21:50:40.547: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Running", Reason="", readiness=true. Elapsed: 22.082826863s Jun 26 21:50:42.552: INFO: Pod "pod-subpath-test-downwardapi-w7xt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.087455026s STEP: Saw pod success Jun 26 21:50:42.552: INFO: Pod "pod-subpath-test-downwardapi-w7xt" satisfied condition "success or failure" Jun 26 21:50:42.555: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-w7xt container test-container-subpath-downwardapi-w7xt: STEP: delete the pod Jun 26 21:50:42.590: INFO: Waiting for pod pod-subpath-test-downwardapi-w7xt to disappear Jun 26 21:50:42.620: INFO: Pod pod-subpath-test-downwardapi-w7xt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-w7xt Jun 26 21:50:42.621: INFO: Deleting pod "pod-subpath-test-downwardapi-w7xt" in namespace "subpath-7355" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:50:42.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7355" for this suite. • [SLOW TEST:24.316 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":126,"skipped":2484,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:42.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-496a6f94-f81c-4395-ba48-31806172b17f STEP: Creating a pod to test consume configMaps Jun 26 21:50:42.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df" in namespace "configmap-2898" to be "success or failure" Jun 26 21:50:42.758: INFO: Pod "pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df": Phase="Pending", Reason="", readiness=false. Elapsed: 45.109899ms Jun 26 21:50:44.761: INFO: Pod "pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048455646s Jun 26 21:50:46.771: INFO: Pod "pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0582399s STEP: Saw pod success Jun 26 21:50:46.771: INFO: Pod "pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df" satisfied condition "success or failure" Jun 26 21:50:46.774: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df container configmap-volume-test: STEP: delete the pod Jun 26 21:50:46.797: INFO: Waiting for pod pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df to disappear Jun 26 21:50:46.801: INFO: Pod pod-configmaps-4448f9d2-c114-48d1-a3e7-1302bab889df no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:50:46.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2898" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2497,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:46.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9256fa80-7e7d-4d62-aec2-df6377111411 STEP: Creating a pod to test consume configMaps Jun 26 21:50:46.911: INFO: Waiting up to 5m0s for pod "pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4" in namespace "configmap-7170" to be "success or failure" Jun 26 21:50:46.930: INFO: Pod "pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.68955ms Jun 26 21:50:48.934: INFO: Pod "pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022666219s Jun 26 21:50:50.938: INFO: Pod "pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.026839649s Jun 26 21:50:52.942: INFO: Pod "pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.030608449s STEP: Saw pod success Jun 26 21:50:52.942: INFO: Pod "pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4" satisfied condition "success or failure" Jun 26 21:50:52.944: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4 container configmap-volume-test: STEP: delete the pod Jun 26 21:50:52.959: INFO: Waiting for pod pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4 to disappear Jun 26 21:50:52.963: INFO: Pod pod-configmaps-cad7c091-3e97-497d-84cc-aadacad9a0b4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:50:52.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7170" for this suite. • [SLOW TEST:6.161 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2512,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:50:52.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3024 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3024 STEP: Deleting pre-stop pod Jun 26 21:51:06.109: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:51:06.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3024" for this suite. 
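------------------------------
In the PreStop flow above, the tester pod's preStop hook reports back to the server pod before the tester dies; the JSON blob in the log is the server's record of that callback ("prestop": 1). The sketch below shows just the hook wiring on a pod spec. The exec command is an assumption (the real test issues an HTTP request to the server pod), and note the field type is LifecycleHandler in current k8s.io/api, while v1.17-era clients like this run's named it Handler.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()
	ns := "default"

	grace := int64(30) // the hook must complete within the grace period
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs before the container receives SIGTERM on delete;
					// the hypothetical URL stands in for the server pod's
					// /prestop endpoint used by the real test.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"/bin/sh", "-c", "wget -q -O- http://server/prestop"},
						},
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("tester created; deleting it will fire the preStop hook first")
}
------------------------------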
• [SLOW TEST:13.198 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":129,"skipped":2517,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:51:06.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 26 21:51:06.747: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 26 21:51:08.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805066, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805066, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805067, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805066, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:51:11.782: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:51:11.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:51:13.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "crd-webhook-1977" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.945 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":130,"skipped":2525,"failed":0} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:51:13.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 26 21:51:17.736: INFO: Successfully updated pod "annotationupdatef826f437-242b-4126-9fc9-7ce048d0afdb" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:51:19.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9777" for this suite. 
• [SLOW TEST:6.675 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2525,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:51:19.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:51:19.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1852" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":132,"skipped":2540,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:51:19.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 26 21:51:24.454: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b1710bdc-7c69-410e-ba9b-3236d34252f5" Jun 26 21:51:24.454: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b1710bdc-7c69-410e-ba9b-3236d34252f5" in namespace "pods-7752" to be "terminated due to deadline exceeded" Jun 26 21:51:24.461: INFO: Pod "pod-update-activedeadlineseconds-b1710bdc-7c69-410e-ba9b-3236d34252f5": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.701553ms Jun 26 21:51:26.465: INFO: Pod "pod-update-activedeadlineseconds-b1710bdc-7c69-410e-ba9b-3236d34252f5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010728845s Jun 26 21:51:26.465: INFO: Pod "pod-update-activedeadlineseconds-b1710bdc-7c69-410e-ba9b-3236d34252f5" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:51:26.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7752" for this suite. • [SLOW TEST:6.618 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2549,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:51:26.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:52:00.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1744" for this suite. 
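Judging by the container names, the blackbox test above starts containers that exit under the three restart policies (rpa/rpof/rpn suggesting Always, OnFailure, Never) and then asserts on RestartCount, Phase, the Ready condition, and State. A minimal sketch of reading those fields, assuming a context-aware client-go (v0.18 or newer; the clients in this run are older) and placeholder namespace/pod names:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpContainerStatus prints the fields the blackbox test asserts on: the pod
// phase plus each container's restart count, readiness, and state.
func dumpContainerStatus(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("phase:", pod.Status.Phase)
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: restarts=%d ready=%t state=%+v\n",
			s.Name, s.RestartCount, s.Ready, s.State)
	}
	return nil
}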
• [SLOW TEST:33.878 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:52:00.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:52:00.410: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 26 21:52:00.429: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 26 21:52:05.432: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 26 21:52:05.432: INFO: Creating deployment "test-rolling-update-deployment" Jun 26 21:52:05.449: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 26 21:52:05.467: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 26 21:52:07.474: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 26 21:52:07.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805125, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805125, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805125, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805125, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:52:09.480: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica 
set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 26 21:52:09.490: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3963 /apis/apps/v1/namespaces/deployment-3963/deployments/test-rolling-update-deployment 27511e63-8eb9-484a-a25a-c5821acbc6fb 27542502 1 2020-06-26 21:52:05 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a83cd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-26 21:52:05 +0000 UTC,LastTransitionTime:2020-06-26 21:52:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-06-26 21:52:08 +0000 UTC,LastTransitionTime:2020-06-26 21:52:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 26 21:52:09.493: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3963 /apis/apps/v1/namespaces/deployment-3963/replicasets/test-rolling-update-deployment-67cf4f6444 8872e86f-42c2-44ec-914d-367ed3bf4a94 27542491 1 2020-06-26 21:52:05 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 27511e63-8eb9-484a-a25a-c5821acbc6fb 0xc004e04187 0xc004e04188}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e041f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:52:09.493: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 26 21:52:09.493: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3963 /apis/apps/v1/namespaces/deployment-3963/replicasets/test-rolling-update-controller 6a5d57af-eecf-4bfd-a2e6-7a914e6cfb5d 27542501 2 2020-06-26 21:52:00 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 27511e63-8eb9-484a-a25a-c5821acbc6fb 0xc004e040b7 0xc004e040b8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004e04118 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:52:09.496: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ssrkb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ssrkb test-rolling-update-deployment-67cf4f6444- deployment-3963 /api/v1/namespaces/deployment-3963/pods/test-rolling-update-deployment-67cf4f6444-ssrkb e6fc880d-f6f8-427f-b201-36747a508984 27542490 0 2020-06-26 21:52:05 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 8872e86f-42c2-44ec-914d-367ed3bf4a94 0xc004e170d7 0xc004e170d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qnxbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qnxbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qnxbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:52:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:52:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.223,StartTime:2020-06-26 21:52:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:52:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0fc06c8f72eaf9a6a1144dc4339eefbe696c09a82b596b48ac4142a55fadcc48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:52:09.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3963" for this suite. • [SLOW TEST:9.147 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":135,"skipped":2588,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:52:09.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8121.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8121.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8121.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8121.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 21:52:15.707: INFO: DNS probes using dns-8121/dns-test-93256f9f-c268-4f3c-91eb-7b396b306ff2 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:52:16.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8121" for this suite. • [SLOW TEST:6.948 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":136,"skipped":2591,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:52:16.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2237, will wait for the garbage collector to delete the pods Jun 26 21:52:22.650: INFO: Deleting Job.batch foo took: 7.604919ms Jun 26 21:52:23.050: INFO: Terminating Job.batch foo pods took: 400.333593ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:52:59.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2237" for this suite. 
• [SLOW TEST:43.131 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":137,"skipped":2600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:52:59.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9f0ce47c-bd06-4681-8a6e-fb4836cec95f STEP: Creating a pod to test consume configMaps Jun 26 21:52:59.669: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57" in namespace "projected-8009" to be "success or failure" Jun 26 21:52:59.707: INFO: Pod "pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57": Phase="Pending", Reason="", readiness=false. Elapsed: 37.513855ms Jun 26 21:53:01.711: INFO: Pod "pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041651374s Jun 26 21:53:03.715: INFO: Pod "pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045566364s STEP: Saw pod success Jun 26 21:53:03.715: INFO: Pod "pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57" satisfied condition "success or failure" Jun 26 21:53:03.718: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57 container projected-configmap-volume-test: STEP: delete the pod Jun 26 21:53:03.774: INFO: Waiting for pod pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57 to disappear Jun 26 21:53:03.781: INFO: Pod pod-projected-configmaps-def4701d-70de-4fe2-99ca-dfb0d25cbb57 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:03.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8009" for this suite. 
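"Mappings and Item mode set" refers to a projected configMap volume where a key is remapped to a custom path and given an explicit file mode. A minimal sketch, assuming k8s.io/api/core/v1; the key and path names are hypothetical, not the generated names above:

package main

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume maps one configMap key to a custom path with an
// explicit file mode (0400), the combination this test exercises.
func projectedConfigMapVolume(cmName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // hypothetical key
							Path: "path/to/data-2", // hypothetical mapped path
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}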
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2630,"failed":0} ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:03.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:53:03.859: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-18608a3d-eeac-4c9e-a42a-fa3237d90f2a" in namespace "security-context-test-3894" to be "success or failure" Jun 26 21:53:03.870: INFO: Pod "busybox-readonly-false-18608a3d-eeac-4c9e-a42a-fa3237d90f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.627872ms Jun 26 21:53:05.964: INFO: Pod "busybox-readonly-false-18608a3d-eeac-4c9e-a42a-fa3237d90f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104459064s Jun 26 21:53:07.968: INFO: Pod "busybox-readonly-false-18608a3d-eeac-4c9e-a42a-fa3237d90f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108406856s Jun 26 21:53:07.968: INFO: Pod "busybox-readonly-false-18608a3d-eeac-4c9e-a42a-fa3237d90f2a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:07.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3894" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2630,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:07.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 21:53:08.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2282' Jun 26 21:53:08.159: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 26 21:53:08.160: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jun 26 21:53:08.182: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-7fcwd] Jun 26 21:53:08.182: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-7fcwd" in namespace "kubectl-2282" to be "running and ready" Jun 26 21:53:08.208: INFO: Pod "e2e-test-httpd-rc-7fcwd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.000201ms Jun 26 21:53:10.212: INFO: Pod "e2e-test-httpd-rc-7fcwd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029004574s Jun 26 21:53:12.215: INFO: Pod "e2e-test-httpd-rc-7fcwd": Phase="Running", Reason="", readiness=true. Elapsed: 4.032874339s Jun 26 21:53:12.215: INFO: Pod "e2e-test-httpd-rc-7fcwd" satisfied condition "running and ready" Jun 26 21:53:12.215: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-7fcwd] Jun 26 21:53:12.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2282' Jun 26 21:53:12.339: INFO: stderr: "" Jun 26 21:53:12.339: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.226. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.226. 
Set the 'ServerName' directive globally to suppress this message\n[Fri Jun 26 21:53:10.641247 2020] [mpm_event:notice] [pid 1:tid 140175496076136] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jun 26 21:53:10.641316 2020] [core:notice] [pid 1:tid 140175496076136] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Jun 26 21:53:12.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2282' Jun 26 21:53:12.443: INFO: stderr: "" Jun 26 21:53:12.443: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:12.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2282" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":140,"skipped":2638,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:12.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 21:53:12.519: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d" in namespace "downward-api-4675" to be "success or failure" Jun 26 21:53:12.523: INFO: Pod "downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.795789ms Jun 26 21:53:14.527: INFO: Pod "downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008205935s Jun 26 21:53:16.532: INFO: Pod "downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012690798s STEP: Saw pod success Jun 26 21:53:16.532: INFO: Pod "downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d" satisfied condition "success or failure" Jun 26 21:53:16.535: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d container client-container: STEP: delete the pod Jun 26 21:53:16.567: INFO: Waiting for pod downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d to disappear Jun 26 21:53:16.677: INFO: Pod downwardapi-volume-dbe91092-2483-48c6-9bee-19497dd0414d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:16.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4675" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2652,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:16.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:53:16.817: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 26 21:53:21.819: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 26 21:53:21.820: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 26 21:53:23.824: INFO: Creating deployment "test-rollover-deployment" Jun 26 21:53:23.835: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 26 21:53:25.842: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 26 21:53:25.847: INFO: Ensure that both replica sets have 1 created replica Jun 26 21:53:25.852: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 26 21:53:25.858: INFO: Updating deployment test-rollover-deployment Jun 26 21:53:25.858: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 26 21:53:27.864: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 26 21:53:27.870: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 26 21:53:27.876: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:27.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805206, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:29.885: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:29.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805206, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:31.885: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:31.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805210, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:33.885: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:33.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805210, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:35.885: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:35.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805210, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:37.885: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:37.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805210, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:39.884: INFO: all replica sets need to contain the pod-template-hash label Jun 26 21:53:39.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805210, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805203, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:41.885: INFO: Jun 26 21:53:41.885: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 26 21:53:41.893: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8663 /apis/apps/v1/namespaces/deployment-8663/deployments/test-rollover-deployment b882877d-69f9-4d6c-a22c-67a851fc72f5 27543081 2 2020-06-26 21:53:23 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003728ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-26 21:53:23 +0000 UTC,LastTransitionTime:2020-06-26 21:53:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-06-26 21:53:40 +0000 UTC,LastTransitionTime:2020-06-26 21:53:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 26 21:53:41.896: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-8663 /apis/apps/v1/namespaces/deployment-8663/replicasets/test-rollover-deployment-574d6dfbff fe91c5a7-96f6-46a0-bffe-5ca5aa69fed6 27543070 2 2020-06-26 21:53:25 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b882877d-69f9-4d6c-a22c-67a851fc72f5 0xc002844df7 0xc002844df8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] 
[] Always 0xc002844ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:53:41.896: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 26 21:53:41.896: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8663 /apis/apps/v1/namespaces/deployment-8663/replicasets/test-rollover-controller 75aaf3f9-39d9-401e-949a-3637cecf507e 27543079 2 2020-06-26 21:53:16 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b882877d-69f9-4d6c-a22c-67a851fc72f5 0xc002844b3f 0xc002844b60}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002844ce8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:53:41.896: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-8663 /apis/apps/v1/namespaces/deployment-8663/replicasets/test-rollover-deployment-f6c94f66c cfc2f85e-31a9-434c-8665-ba8832649f23 27543019 2 2020-06-26 21:53:23 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b882877d-69f9-4d6c-a22c-67a851fc72f5 0xc002844f60 0xc002844f61}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002844ff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 21:53:41.899: INFO: Pod "test-rollover-deployment-574d6dfbff-c7fq6" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-c7fq6 test-rollover-deployment-574d6dfbff- deployment-8663 /api/v1/namespaces/deployment-8663/pods/test-rollover-deployment-574d6dfbff-c7fq6 d3d746d8-8435-44cc-b177-c84759c3a990 27543038 0 2020-06-26 21:53:25 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff fe91c5a7-96f6-46a0-bffe-5ca5aa69fed6 0xc002845bc7 0xc002845bc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7dfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7dfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7dfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-06-26 21:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 21:53:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.228,StartTime:2020-06-26 21:53:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 21:53:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8750ebde192de829b8b59474f572de3b7edd99083342099969f26dcf971f396a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:41.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8663" for this suite. • [SLOW TEST:25.219 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":142,"skipped":2655,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:41.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0626 21:53:54.184703 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
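The key step in the garbage-collector test above is "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well": an object is only garbage-collected once all of its owners are gone, so those dual-owned pods must survive the deletion of the first rc. Roughly, in client-go terms (a sketch, not the test's own code):

package main

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// addSecondOwner appends an extra OwnerReference; with two owners recorded,
// deleting only one of them must not cause the dependent to be collected.
func addSecondOwner(obj metav1.Object, owner metav1.OwnerReference) {
	obj.SetOwnerReferences(append(obj.GetOwnerReferences(), owner))
}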
Jun 26 21:53:54.184: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:54.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4223" for this suite. • [SLOW TEST:12.414 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":143,"skipped":2669,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:54.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:53:54.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4990" for this suite. 
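------------------------------
The QoS test above passes because a pod's QoS class is computed server-side when the pod is created: if every container's requests equal its limits for both cpu and memory, status.qosClass is set to Guaranteed. A minimal client-go sketch of the same behavior (a sketch only, assuming client-go v0.18+ where write calls take a context; the pod name, the "default" namespace, and the resource values are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Identical requests and limits for every resource => Guaranteed QoS.
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "agnhost",
				Image:     "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The QoS class is already set on the object returned by Create;
	// there is no need to wait for the kubelet.
	fmt.Println("qosClass:", created.Status.QOSClass) // expect: Guaranteed
}

With unequal or missing requests the same field would come back Burstable or BestEffort, which is the distinction the conformance test asserts on.
------------------------------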
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":144,"skipped":2676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:53:54.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:53:55.423: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:53:57.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 21:53:59.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805235, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:54:02.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:54:02.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7592" for this suite. STEP: Destroying namespace "webhook-7592-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.117 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":145,"skipped":2702,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:54:02.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:54:03.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1733" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":146,"skipped":2704,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:54:03.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jun 26 21:54:03.234: INFO: Waiting up to 5m0s for pod "client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12" in namespace "containers-1041" to be "success or failure" Jun 26 21:54:03.238: INFO: Pod "client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12": Phase="Pending", Reason="", readiness=false. Elapsed: 3.841482ms Jun 26 21:54:05.250: INFO: Pod "client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016005618s Jun 26 21:54:07.254: INFO: Pod "client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12": Phase="Running", Reason="", readiness=true. Elapsed: 4.020339597s Jun 26 21:54:09.257: INFO: Pod "client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023111262s STEP: Saw pod success Jun 26 21:54:09.257: INFO: Pod "client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12" satisfied condition "success or failure" Jun 26 21:54:09.259: INFO: Trying to get logs from node jerma-worker pod client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12 container test-container: STEP: delete the pod Jun 26 21:54:09.326: INFO: Waiting for pod client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12 to disappear Jun 26 21:54:09.334: INFO: Pod client-containers-c5b3e62d-1fb7-4aa8-b63a-df4df1010a12 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:54:09.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1041" for this suite. 
• [SLOW TEST:6.247 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2711,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:54:09.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1276.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1276.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 21:54:15.491: INFO: DNS probes using dns-test-8222523d-a1d3-4233-90be-43ae7152e0ee succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1276.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1276.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 21:54:21.603: INFO: File wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains '' instead of 'bar.example.com.' Jun 26 21:54:21.606: INFO: File jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:21.606: INFO: Lookups using dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 failed for: [wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local] Jun 26 21:54:26.611: INFO: File wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. 
' instead of 'bar.example.com.' Jun 26 21:54:26.615: INFO: File jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:26.615: INFO: Lookups using dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 failed for: [wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local] Jun 26 21:54:31.691: INFO: File wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:31.713: INFO: File jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:31.713: INFO: Lookups using dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 failed for: [wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local] Jun 26 21:54:36.611: INFO: File wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:36.615: INFO: File jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:36.615: INFO: Lookups using dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 failed for: [wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local] Jun 26 21:54:41.611: INFO: File wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 21:54:41.616: INFO: File jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local from pod dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 26 21:54:41.616: INFO: Lookups using dns-1276/dns-test-83b9c078-308b-403f-a5d9-86a779435b91 failed for: [wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local] Jun 26 21:54:46.615: INFO: DNS probes using dns-test-83b9c078-308b-403f-a5d9-86a779435b91 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1276.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1276.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1276.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1276.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 21:54:53.260: INFO: DNS probes using dns-test-06c9a6da-b713-4e8f-a79a-098011a80c2b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:54:53.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1276" for this suite. • [SLOW TEST:43.993 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":148,"skipped":2730,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:54:53.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 in namespace container-probe-7341 Jun 26 21:54:57.737: INFO: Started pod liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 in namespace container-probe-7341 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 21:54:57.740: INFO: Initial restart count of pod liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 is 0 Jun 26 21:55:13.782: INFO: Restart count of pod container-probe-7341/liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 is now 1 (16.041292587s elapsed) Jun 26 21:55:35.844: INFO: Restart count of pod container-probe-7341/liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 is now 2 (38.103659267s elapsed) Jun 26 
21:55:53.880: INFO: Restart count of pod container-probe-7341/liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 is now 3 (56.139891741s elapsed) Jun 26 21:56:13.967: INFO: Restart count of pod container-probe-7341/liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 is now 4 (1m16.226208867s elapsed) Jun 26 21:57:28.153: INFO: Restart count of pod container-probe-7341/liveness-1ad32b9a-cae4-47f4-930f-84fce1be9cc6 is now 5 (2m30.412525827s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:57:28.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7341" for this suite. • [SLOW TEST:154.860 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2752,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:57:28.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jun 26 21:57:28.270: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:57:44.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-608" for this suite. 
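------------------------------
The CustomResourcePublishOpenAPI test above drives a multi-version CRD through an update that renames one served version, then checks that the aggregated OpenAPI document follows. A sketch of the starting point, a two-version CRD created with the apiextensions clientset (a hypothetical widgets.example.com group; client-go/apiextensions v0.18+ signatures assumed):

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// v1 CRDs require a structural schema per version; a bare object schema
	// is enough for this sketch.
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1alpha1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Renaming a version is then just an Update that changes one spec.versions[].name (say v1alpha1 to v1beta1); the apiserver republishes the OpenAPI spec, which is roughly what the "check the new version name is served" and "check the old version name is removed" steps verify.
------------------------------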
• [SLOW TEST:16.616 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":150,"skipped":2767,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:57:44.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:57:48.928: INFO: Waiting up to 5m0s for pod "client-envvars-d28d9146-cd19-417e-ad85-a59c35202198" in namespace "pods-5357" to be "success or failure" Jun 26 21:57:48.988: INFO: Pod "client-envvars-d28d9146-cd19-417e-ad85-a59c35202198": Phase="Pending", Reason="", readiness=false. Elapsed: 59.497559ms Jun 26 21:57:51.028: INFO: Pod "client-envvars-d28d9146-cd19-417e-ad85-a59c35202198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100329523s Jun 26 21:57:53.033: INFO: Pod "client-envvars-d28d9146-cd19-417e-ad85-a59c35202198": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105423827s STEP: Saw pod success Jun 26 21:57:53.034: INFO: Pod "client-envvars-d28d9146-cd19-417e-ad85-a59c35202198" satisfied condition "success or failure" Jun 26 21:57:53.037: INFO: Trying to get logs from node jerma-worker pod client-envvars-d28d9146-cd19-417e-ad85-a59c35202198 container env3cont: STEP: delete the pod Jun 26 21:57:53.064: INFO: Waiting for pod client-envvars-d28d9146-cd19-417e-ad85-a59c35202198 to disappear Jun 26 21:57:53.068: INFO: Pod client-envvars-d28d9146-cd19-417e-ad85-a59c35202198 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:57:53.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5357" for this suite. 
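------------------------------
The Pods env-var test above depends on ordering: the kubelet injects {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables only for services that already exist when the container starts, which is why the test creates its service first. A compact sketch (client-go v0.18+; the service name "fooservice", its selector, and the namespace are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// The service must exist before the pod starts: the kubelet snapshots
	// service env vars at container creation time.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "some-backend"}, // illustrative
			Ports:    []corev1.ServicePort{{Port: 8765}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-dump"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "env-dump",
				Image: "docker.io/library/busybox:1.29",
				// Expect FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT
				// (plus docker-link-style FOOSERVICE_PORT_* vars) in the output.
				Command: []string{"/bin/sh", "-c", "env | grep FOOSERVICE"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------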
• [SLOW TEST:8.264 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:57:53.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 21:57:53.167: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e93f32b3-6d99-4702-8c83-d9e0cd21af18" in namespace "security-context-test-1922" to be "success or failure" Jun 26 21:57:53.170: INFO: Pod "busybox-user-65534-e93f32b3-6d99-4702-8c83-d9e0cd21af18": Phase="Pending", Reason="", readiness=false. Elapsed: 3.178247ms Jun 26 21:57:55.174: INFO: Pod "busybox-user-65534-e93f32b3-6d99-4702-8c83-d9e0cd21af18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006676461s Jun 26 21:57:57.178: INFO: Pod "busybox-user-65534-e93f32b3-6d99-4702-8c83-d9e0cd21af18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011296675s Jun 26 21:57:57.178: INFO: Pod "busybox-user-65534-e93f32b3-6d99-4702-8c83-d9e0cd21af18" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:57:57.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1922" for this suite. 
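------------------------------
The Security Context test above sets spec.containers[].securityContext.runAsUser and then inspects the container's effective UID from its log. The equivalent, stripped to essentials (client-go v0.18+; names illustrative; 65534 is the conventional "nobody" UID the test uses):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid := int64(65534) // conventionally "nobody"; RunAsUser takes a pointer
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29",
				// "id -u" printing 65534 is what the test asserts on.
				Command:         []string{"/bin/sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------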
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2846,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:57:57.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 21:57:57.993: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 21:58:00.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805478, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805478, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805478, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805477, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 21:58:03.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:58:03.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8382" for this suite. STEP: Destroying namespace "webhook-8382-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.045 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":153,"skipped":2866,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:58:03.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 26 21:58:03.307: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8701 /api/v1/namespaces/watch-8701/configmaps/e2e-watch-test-label-changed cd3d17bc-6129-40fc-8084-6f7abeca69a1 27544494 0 2020-06-26 21:58:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 26 21:58:03.307: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8701 /api/v1/namespaces/watch-8701/configmaps/e2e-watch-test-label-changed cd3d17bc-6129-40fc-8084-6f7abeca69a1 27544495 0 2020-06-26 21:58:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 26 21:58:03.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8701 /api/v1/namespaces/watch-8701/configmaps/e2e-watch-test-label-changed cd3d17bc-6129-40fc-8084-6f7abeca69a1 27544496 0 2020-06-26 21:58:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 26 21:58:13.360: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8701 
/api/v1/namespaces/watch-8701/configmaps/e2e-watch-test-label-changed cd3d17bc-6129-40fc-8084-6f7abeca69a1 27544549 0 2020-06-26 21:58:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 26 21:58:13.360: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8701 /api/v1/namespaces/watch-8701/configmaps/e2e-watch-test-label-changed cd3d17bc-6129-40fc-8084-6f7abeca69a1 27544550 0 2020-06-26 21:58:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 26 21:58:13.360: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8701 /api/v1/namespaces/watch-8701/configmaps/e2e-watch-test-label-changed cd3d17bc-6129-40fc-8084-6f7abeca69a1 27544551 0 2020-06-26 21:58:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:58:13.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8701" for this suite. • [SLOW TEST:10.136 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":154,"skipped":2883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:58:13.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-33d474f2-1357-47a8-a03f-39b92bad9da0 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-33d474f2-1357-47a8-a03f-39b92bad9da0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:59:39.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6061" for this suite. 
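------------------------------
The projected-ConfigMap test above works because the kubelet rewrites projected volume contents when the backing ConfigMap changes, so an update becomes visible inside a running pod without a restart. A sketch of that round trip (client-go v0.18+; the object names, key, and mount path are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	created, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"while true; do cat /etc/demo/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/demo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					// A projected volume can merge several sources; here just one.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The file under /etc/demo eventually reflects this update; the kubelet
	// resyncs projected volumes on its sync period plus cache TTL, typically
	// on the order of a minute.
	created.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps("default").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

That propagation delay is most of the 86-second runtime reported for the test below.
------------------------------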
• [SLOW TEST:86.605 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2912,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:59:39.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:59:44.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5004" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":156,"skipped":2920,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:59:44.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-7584/configmap-test-b4d87a99-ff1a-405f-a061-c3b875cc9512 STEP: Creating a pod to test consume configMaps Jun 26 21:59:44.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d" in namespace "configmap-7584" to be "success or failure" Jun 26 21:59:44.666: INFO: Pod "pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d": Phase="Pending", Reason="", readiness=false. Elapsed: 400.603819ms Jun 26 21:59:46.670: INFO: Pod "pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404988138s Jun 26 21:59:48.674: INFO: Pod "pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.409146239s STEP: Saw pod success Jun 26 21:59:48.674: INFO: Pod "pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d" satisfied condition "success or failure" Jun 26 21:59:48.677: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d container env-test: STEP: delete the pod Jun 26 21:59:48.734: INFO: Waiting for pod pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d to disappear Jun 26 21:59:48.802: INFO: Pod pod-configmaps-dc92887d-c8f5-4b35-ba26-125e7f74ba6d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 21:59:48.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7584" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2920,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 21:59:48.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 26 21:59:49.548: INFO: Pod name wrapped-volume-race-a01f964a-85e1-43f6-a1f8-8b3a2f481ff7: Found 0 pods out of 5 Jun 26 21:59:54.571: INFO: Pod name wrapped-volume-race-a01f964a-85e1-43f6-a1f8-8b3a2f481ff7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a01f964a-85e1-43f6-a1f8-8b3a2f481ff7 in namespace emptydir-wrapper-588, will wait for the garbage collector to delete the pods Jun 26 22:00:06.773: INFO: Deleting ReplicationController wrapped-volume-race-a01f964a-85e1-43f6-a1f8-8b3a2f481ff7 took: 18.243037ms Jun 26 22:00:07.173: INFO: Terminating ReplicationController wrapped-volume-race-a01f964a-85e1-43f6-a1f8-8b3a2f481ff7 pods took: 400.500029ms STEP: Creating RC which spawns configmap-volume pods Jun 26 22:00:20.300: INFO: Pod name wrapped-volume-race-cfaf6dbe-2683-4fe0-a994-c10a184e2cf2: Found 0 pods out of 5 Jun 26 22:00:25.308: INFO: Pod name wrapped-volume-race-cfaf6dbe-2683-4fe0-a994-c10a184e2cf2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cfaf6dbe-2683-4fe0-a994-c10a184e2cf2 in namespace emptydir-wrapper-588, will wait for the garbage collector to delete the pods Jun 26 22:00:39.389: INFO: Deleting ReplicationController wrapped-volume-race-cfaf6dbe-2683-4fe0-a994-c10a184e2cf2 took: 7.689311ms Jun 26 22:00:39.690: INFO: Terminating ReplicationController wrapped-volume-race-cfaf6dbe-2683-4fe0-a994-c10a184e2cf2 pods took: 300.281188ms STEP: Creating RC which spawns configmap-volume pods Jun 26 22:00:49.762: INFO: Pod name wrapped-volume-race-027e0e12-100c-43ea-bb92-a55792c85449: Found 0 
pods out of 5 Jun 26 22:00:54.770: INFO: Pod name wrapped-volume-race-027e0e12-100c-43ea-bb92-a55792c85449: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-027e0e12-100c-43ea-bb92-a55792c85449 in namespace emptydir-wrapper-588, will wait for the garbage collector to delete the pods Jun 26 22:01:06.904: INFO: Deleting ReplicationController wrapped-volume-race-027e0e12-100c-43ea-bb92-a55792c85449 took: 12.227776ms Jun 26 22:01:07.304: INFO: Terminating ReplicationController wrapped-volume-race-027e0e12-100c-43ea-bb92-a55792c85449 pods took: 400.264845ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:21.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-588" for this suite. • [SLOW TEST:92.400 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":158,"skipped":2927,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:21.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-955aa1c8-40e2-4cc0-a8f3-93140d5ce8d6 STEP: Creating a pod to test consume configMaps Jun 26 22:01:21.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19" in namespace "configmap-8571" to be "success or failure" Jun 26 22:01:21.389: INFO: Pod "pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19": Phase="Pending", Reason="", readiness=false. Elapsed: 14.696187ms Jun 26 22:01:23.402: INFO: Pod "pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027394026s Jun 26 22:01:25.406: INFO: Pod "pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031735548s STEP: Saw pod success Jun 26 22:01:25.406: INFO: Pod "pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19" satisfied condition "success or failure" Jun 26 22:01:25.409: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19 container configmap-volume-test: STEP: delete the pod Jun 26 22:01:25.516: INFO: Waiting for pod pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19 to disappear Jun 26 22:01:25.533: INFO: Pod pod-configmaps-cc24c3af-4b44-4136-8fbe-5325037add19 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:25.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8571" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2935,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:25.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jun 26 22:01:29.720: INFO: Pod pod-hostip-fe49739e-4e04-4573-9af8-8fa4a85a5c5e has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:29.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6865" for this suite. 
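------------------------------
The host-IP test above reads status.hostIP, which stays empty until the scheduler binds the pod and the kubelet posts status, hence the wait before the INFO line reporting 172.17.0.10. A polling sketch (client-go v0.18+; pod name and namespace illustrative; a Watch would also work in place of the loop):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostip-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "sleeper",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// status.hostIP is filled in asynchronously, so poll rather than reading
	// it straight back from the Create response.
	for i := 0; i < 30; i++ {
		p, err := cs.CoreV1().Pods("default").Get(ctx, "hostip-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if p.Status.HostIP != "" {
			fmt.Println("hostIP:", p.Status.HostIP)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for hostIP")
}
------------------------------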
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2947,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:29.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 26 22:01:29.842: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:37.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9517" for this suite. • [SLOW TEST:8.216 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":161,"skipped":2957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:37.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:01:38.083: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 26 22:01:43.086: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 26 22:01:43.086: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 26 22:01:43.134: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3610 
/apis/apps/v1/namespaces/deployment-3610/deployments/test-cleanup-deployment 946f9b65-95a7-4167-a005-a823b9c303bc 27546180 1 2020-06-26 22:01:43 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003674fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 26 22:01:43.165: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3610 /apis/apps/v1/namespaces/deployment-3610/replicasets/test-cleanup-deployment-55ffc6b7b6 4e8f8b0b-3ff9-4ed8-9253-bf94ba1f6f71 27546184 1 2020-06-26 22:01:43 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 946f9b65-95a7-4167-a005-a823b9c303bc 0xc003d94127 0xc003d94128}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d94198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 22:01:43.165: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 26 22:01:43.166: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3610 /apis/apps/v1/namespaces/deployment-3610/replicasets/test-cleanup-controller
f9895ae7-24bb-4dee-a90d-28f828cf1ee8 27546183 1 2020-06-26 22:01:38 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 946f9b65-95a7-4167-a005-a823b9c303bc 0xc003d94057 0xc003d94058}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003d940b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 26 22:01:43.202: INFO: Pod "test-cleanup-controller-dj5jq" is available: &Pod{ObjectMeta:{test-cleanup-controller-dj5jq test-cleanup-controller- deployment-3610 /api/v1/namespaces/deployment-3610/pods/test-cleanup-controller-dj5jq 42f3e608-53ad-48c9-8725-18961f9d8113 27546171 0 2020-06-26 22:01:38 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller f9895ae7-24bb-4dee-a90d-28f828cf1ee8 0xc003d945c7 0xc003d945c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jtrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jtrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jtrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:01:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:01:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:01:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:01:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.162,StartTime:2020-06-26 22:01:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 22:01:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://db7256d099757e24aeec56e3bbad7ea78ce0c89915c9b4ab5a7866c53b41c738,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 22:01:43.203: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-kt5w5" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-kt5w5 test-cleanup-deployment-55ffc6b7b6- deployment-3610 /api/v1/namespaces/deployment-3610/pods/test-cleanup-deployment-55ffc6b7b6-kt5w5 14194f78-2fe8-40ad-9e47-94227bc17ed3 27546190 0 2020-06-26 22:01:43 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 4e8f8b0b-3ff9-4ed8-9253-bf94ba1f6f71 0xc003d94757 0xc003d94758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jtrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jtrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jtrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:01:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:43.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3610" for this suite. 
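The RevisionHistoryLimit:*0 visible in the deployment dump above is the knob this test exercises: with a zero history limit, the controller deletes a superseded ReplicaSet as soon as a newer one replaces it. A minimal sketch of the same behavior, assuming kubectl access to any cluster; the Deployment name is hypothetical, while the selector and image are the ones the test uses:

    $ kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cleanup-demo              # hypothetical name, not the test's own
    spec:
      replicas: 1
      revisionHistoryLimit: 0         # keep no superseded ReplicaSets around
      selector:
        matchLabels:
          name: cleanup-pod
      template:
        metadata:
          labels:
            name: cleanup-pod
        spec:
          containers:
          - name: agnhost
            image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    EOF
    $ kubectl get rs -l name=cleanup-pod   # after any template change rolls out, only the newest ReplicaSet should remain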
• [SLOW TEST:5.379 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":162,"skipped":2997,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:43.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Jun 26 22:01:43.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1801' Jun 26 22:01:48.659: INFO: stderr: "" Jun 26 22:01:48.659: INFO: stdout: "pod/pause created\n" Jun 26 22:01:48.659: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 26 22:01:48.659: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1801" to be "running and ready" Jun 26 22:01:48.697: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.558468ms Jun 26 22:01:50.739: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080715707s Jun 26 22:01:52.744: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.084823536s Jun 26 22:01:52.744: INFO: Pod "pause" satisfied condition "running and ready" Jun 26 22:01:52.744: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jun 26 22:01:52.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1801' Jun 26 22:01:52.855: INFO: stderr: "" Jun 26 22:01:52.855: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 26 22:01:52.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1801' Jun 26 22:01:52.948: INFO: stderr: "" Jun 26 22:01:52.948: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 26 22:01:52.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1801' Jun 26 22:01:53.070: INFO: stderr: "" Jun 26 22:01:53.070: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 26 22:01:53.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1801' Jun 26 22:01:53.180: INFO: stderr: "" Jun 26 22:01:53.180: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Jun 26 22:01:53.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1801' Jun 26 22:01:53.304: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 22:01:53.304: INFO: stdout: "pod \"pause\" force deleted\n" Jun 26 22:01:53.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1801' Jun 26 22:01:53.516: INFO: stderr: "No resources found in kubectl-1801 namespace.\n" Jun 26 22:01:53.516: INFO: stdout: "" Jun 26 22:01:53.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1801 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 22:01:53.799: INFO: stderr: "" Jun 26 22:01:53.799: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:53.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1801" for this suite. 
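Stripped of the harness plumbing, the label round-trip above is three kubectl invocations (taken verbatim from the run, minus the --kubeconfig/--namespace flags); the trailing dash in the last one is the standard syntax for removing a label. The sketch assumes the pause pod exists in the current namespace:

    $ kubectl label pods pause testing-label=testing-label-value   # add the label
    $ kubectl get pod pause -L testing-label                       # -L prints the label as an extra column
    $ kubectl label pods pause testing-label-                      # trailing '-' removes the label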
• [SLOW TEST:10.466 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":163,"skipped":3006,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:53.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 26 22:01:53.943: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.627885ms)
Jun 26 22:01:53.946: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.967869ms)
Jun 26 22:01:53.949: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.289344ms)
Jun 26 22:01:53.952: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.996653ms)
Jun 26 22:01:53.955: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.187361ms)
Jun 26 22:01:53.959: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.141864ms)
Jun 26 22:01:53.962: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.682528ms)
Jun 26 22:01:53.966: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.620192ms)
Jun 26 22:01:53.972: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 6.003207ms)
Jun 26 22:01:53.976: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.064785ms)
Jun 26 22:01:53.980: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.063906ms)
Jun 26 22:01:53.984: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.706508ms)
Jun 26 22:01:53.987: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.285212ms)
Jun 26 22:01:53.990: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.845031ms)
Jun 26 22:01:53.994: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.342197ms)
Jun 26 22:01:53.996: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.691728ms)
Jun 26 22:01:54.000: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.563297ms)
Jun 26 22:01:54.003: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.046362ms)
Jun 26 22:01:54.007: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.076533ms)
Jun 26 22:01:54.010: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.091417ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:54.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-116" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":164,"skipped":3017,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:54.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 26 22:01:54.320: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 14.475276ms)
Jun 26 22:01:54.326: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.205611ms)
Jun 26 22:01:54.329: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.404771ms)
Jun 26 22:01:54.332: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.067784ms)
Jun 26 22:01:54.335: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.992989ms)
Jun 26 22:01:54.339: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.327169ms)
Jun 26 22:01:54.341: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.746031ms)
Jun 26 22:01:54.344: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.463561ms)
Jun 26 22:01:54.346: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.498402ms)
Jun 26 22:01:54.349: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.301728ms)
Jun 26 22:01:54.351: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.011232ms)
Jun 26 22:01:54.353: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.191849ms)
Jun 26 22:01:54.355: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.358111ms)
Jun 26 22:01:54.358: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.342837ms)
Jun 26 22:01:54.360: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.474632ms)
Jun 26 22:01:54.363: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.549032ms)
Jun 26 22:01:54.366: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.759006ms)
Jun 26 22:01:54.368: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.675493ms)
Jun 26 22:01:54.371: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.765232ms)
Jun 26 22:01:54.374: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.890221ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1384" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":165,"skipped":3035,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:54.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-8fd71ece-b5b5-44d7-a14a-0ca5fde10ea9 STEP: Creating a pod to test consume secrets Jun 26 22:01:54.456: INFO: Waiting up to 5m0s for pod "pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452" in namespace "secrets-2658" to be "success or failure" Jun 26 22:01:54.460: INFO: Pod "pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980071ms Jun 26 22:01:56.465: INFO: Pod "pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008660777s Jun 26 22:01:58.470: INFO: Pod "pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013546386s STEP: Saw pod success Jun 26 22:01:58.470: INFO: Pod "pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452" satisfied condition "success or failure" Jun 26 22:01:58.473: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452 container secret-volume-test: STEP: delete the pod Jun 26 22:01:58.493: INFO: Waiting for pod pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452 to disappear Jun 26 22:01:58.541: INFO: Pod pod-secrets-d6e68a9a-f996-4a8e-b743-26e82b50c452 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:01:58.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2658" for this suite.
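What the secret-volume-test container verifies is simply that one Secret can back two volumes in the same pod spec. A minimal reproduction, assuming kubectl access; all names are hypothetical (the conformance suite generates random names and uses its own mounttest image rather than busybox):

    $ kubectl create secret generic test-secret --from-literal=data-1=value-1
    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo          # hypothetical
    spec:
      restartPolicy: Never
      volumes:
      - name: secret-volume-1
        secret:
          secretName: test-secret
      - name: secret-volume-2
        secret:
          secretName: test-secret     # the same Secret backs a second volume
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
    EOF
    $ kubectl logs pod-secrets-demo   # should print value-1 twice once the pod has Succeeded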
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":3037,"failed":0} ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:01:58.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:01:58.582: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:02.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1433" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":3037,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:02.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 26 22:02:02.873: INFO: Waiting up to 5m0s for pod "pod-161fed98-3c87-46cf-a297-293b15c96d43" in namespace "emptydir-7887" to be "success or failure" Jun 26 22:02:02.883: INFO: Pod "pod-161fed98-3c87-46cf-a297-293b15c96d43": Phase="Pending", Reason="", readiness=false. Elapsed: 10.173062ms Jun 26 22:02:04.886: INFO: Pod "pod-161fed98-3c87-46cf-a297-293b15c96d43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01299753s Jun 26 22:02:06.890: INFO: Pod "pod-161fed98-3c87-46cf-a297-293b15c96d43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017317075s STEP: Saw pod success Jun 26 22:02:06.890: INFO: Pod "pod-161fed98-3c87-46cf-a297-293b15c96d43" satisfied condition "success or failure" Jun 26 22:02:06.893: INFO: Trying to get logs from node jerma-worker pod pod-161fed98-3c87-46cf-a297-293b15c96d43 container test-container: STEP: delete the pod Jun 26 22:02:06.915: INFO: Waiting for pod pod-161fed98-3c87-46cf-a297-293b15c96d43 to disappear Jun 26 22:02:06.919: INFO: Pod pod-161fed98-3c87-46cf-a297-293b15c96d43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:06.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7887" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":3039,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:06.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-07880176-a03e-48e0-b72e-34f474721eef STEP: Creating a pod to test consume configMaps Jun 26 22:02:07.077: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843" in namespace "projected-2314" to be "success or failure" Jun 26 22:02:07.110: INFO: Pod "pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843": Phase="Pending", Reason="", readiness=false. Elapsed: 32.607317ms Jun 26 22:02:09.114: INFO: Pod "pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036591196s Jun 26 22:02:11.118: INFO: Pod "pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040451206s STEP: Saw pod success Jun 26 22:02:11.118: INFO: Pod "pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843" satisfied condition "success or failure" Jun 26 22:02:11.120: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843 container projected-configmap-volume-test: STEP: delete the pod Jun 26 22:02:11.142: INFO: Waiting for pod pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843 to disappear Jun 26 22:02:11.153: INFO: Pod pod-projected-configmaps-7aee0d5b-a984-4340-9c76-cb7d377a5843 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:11.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2314" for this suite. 
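The projected-configmap test above follows the same pattern with a projected volume: one ConfigMap feeding two volumes of the same pod. A sketch under the same assumptions (hypothetical names, busybox standing in for the suite's test image):

    $ kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-cm-demo     # hypothetical
    spec:
      restartPolicy: Never
      volumes:
      - name: projected-volume-1
        projected:
          sources:
          - configMap:
              name: projected-cm-demo
      - name: projected-volume-2
        projected:
          sources:
          - configMap:
              name: projected-cm-demo   # same ConfigMap, second projected volume
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
        volumeMounts:
        - name: projected-volume-1
          mountPath: /etc/projected-1
        - name: projected-volume-2
          mountPath: /etc/projected-2
    EOF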
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":3048,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:11.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jun 26 22:02:11.249: INFO: Waiting up to 5m0s for pod "pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f" in namespace "emptydir-8816" to be "success or failure" Jun 26 22:02:11.267: INFO: Pod "pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.634947ms Jun 26 22:02:13.271: INFO: Pod "pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022262825s Jun 26 22:02:15.276: INFO: Pod "pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026518093s STEP: Saw pod success Jun 26 22:02:15.276: INFO: Pod "pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f" satisfied condition "success or failure" Jun 26 22:02:15.278: INFO: Trying to get logs from node jerma-worker pod pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f container test-container: STEP: delete the pod Jun 26 22:02:15.380: INFO: Waiting for pod pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f to disappear Jun 26 22:02:15.401: INFO: Pod pod-cb9fc5ff-9446-48af-8d2e-d3959d2b752f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:15.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8816" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":3082,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:15.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:02:15.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c" in namespace "downward-api-780" to be "success or failure" Jun 26 22:02:15.613: INFO: Pod "downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.414858ms Jun 26 22:02:17.617: INFO: Pod "downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02396412s Jun 26 22:02:19.621: INFO: Pod "downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028290143s STEP: Saw pod success Jun 26 22:02:19.621: INFO: Pod "downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c" satisfied condition "success or failure" Jun 26 22:02:19.624: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c container client-container: STEP: delete the pod Jun 26 22:02:19.643: INFO: Waiting for pod downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c to disappear Jun 26 22:02:19.652: INFO: Pod downwardapi-volume-74583a71-2ca7-4103-8be3-21623d634d0c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:19.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-780" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":3086,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:19.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jun 26 22:02:19.811: INFO: Waiting up to 5m0s for pod "client-containers-76cc95e5-77c5-4378-be48-4734964a687c" in namespace "containers-8585" to be "success or failure" Jun 26 22:02:19.821: INFO: Pod "client-containers-76cc95e5-77c5-4378-be48-4734964a687c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05398ms Jun 26 22:02:21.919: INFO: Pod "client-containers-76cc95e5-77c5-4378-be48-4734964a687c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107834882s Jun 26 22:02:23.923: INFO: Pod "client-containers-76cc95e5-77c5-4378-be48-4734964a687c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11231341s STEP: Saw pod success Jun 26 22:02:23.923: INFO: Pod "client-containers-76cc95e5-77c5-4378-be48-4734964a687c" satisfied condition "success or failure" Jun 26 22:02:23.926: INFO: Trying to get logs from node jerma-worker pod client-containers-76cc95e5-77c5-4378-be48-4734964a687c container test-container: STEP: delete the pod Jun 26 22:02:23.966: INFO: Waiting for pod client-containers-76cc95e5-77c5-4378-be48-4734964a687c to disappear Jun 26 22:02:23.979: INFO: Pod client-containers-76cc95e5-77c5-4378-be48-4734964a687c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:23.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8585" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":3089,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:23.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-3003 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3003 to expose endpoints map[] Jun 26 22:02:24.188: INFO: Get endpoints failed (51.694781ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 26 22:02:25.192: INFO: successfully validated that service multi-endpoint-test in namespace services-3003 exposes endpoints map[] (1.05587457s elapsed) STEP: Creating pod pod1 in namespace services-3003 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3003 to expose endpoints map[pod1:[100]] Jun 26 22:02:29.267: INFO: successfully validated that service multi-endpoint-test in namespace services-3003 exposes endpoints map[pod1:[100]] (4.067505342s elapsed) STEP: Creating pod pod2 in namespace services-3003 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3003 to expose endpoints map[pod1:[100] pod2:[101]] Jun 26 22:02:32.344: INFO: successfully validated that service multi-endpoint-test in namespace services-3003 exposes endpoints map[pod1:[100] pod2:[101]] (3.070315433s elapsed) STEP: Deleting pod pod1 in namespace services-3003 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3003 to expose endpoints map[pod2:[101]] Jun 26 22:02:33.423: INFO: successfully validated that service multi-endpoint-test in namespace services-3003 exposes endpoints map[pod2:[101]] (1.074986692s elapsed) STEP: Deleting pod pod2 in namespace services-3003 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3003 to expose endpoints map[] Jun 26 22:02:34.591: INFO: successfully validated that service multi-endpoint-test in namespace services-3003 exposes endpoints map[] (1.163072747s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:02:34.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3003" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.762 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":173,"skipped":3117,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:02:34.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1231.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1231.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 247.239.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.239.247_udp@PTR;check="$$(dig +tcp +noall +answer +search 247.239.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.239.247_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1231.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1231.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1231.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 247.239.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.239.247_udp@PTR;check="$$(dig +tcp +noall +answer +search 247.239.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.239.247_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 22:02:41.465: INFO: Unable to read wheezy_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.468: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.555: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.558: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.583: INFO: Unable to read jessie_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.585: INFO: Unable to read jessie_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.587: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.589: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:41.603: INFO: Lookups using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 failed for: [wheezy_udp@dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_udp@dns-test-service.dns-1231.svc.cluster.local jessie_tcp@dns-test-service.dns-1231.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local] Jun 26 22:02:46.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.609: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods 
dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.611: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.614: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.630: INFO: Unable to read jessie_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.635: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.637: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:46.653: INFO: Lookups using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 failed for: [wheezy_udp@dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_udp@dns-test-service.dns-1231.svc.cluster.local jessie_tcp@dns-test-service.dns-1231.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local] Jun 26 22:02:51.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.615: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.618: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.640: INFO: Unable to read jessie_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the 
server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.643: INFO: Unable to read jessie_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:51.666: INFO: Lookups using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 failed for: [wheezy_udp@dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_udp@dns-test-service.dns-1231.svc.cluster.local jessie_tcp@dns-test-service.dns-1231.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local] Jun 26 22:02:56.607: INFO: Unable to read wheezy_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.610: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.613: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.616: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.643: INFO: Unable to read jessie_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.645: INFO: Unable to read jessie_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.650: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod 
dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:02:56.668: INFO: Lookups using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 failed for: [wheezy_udp@dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_udp@dns-test-service.dns-1231.svc.cluster.local jessie_tcp@dns-test-service.dns-1231.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local] Jun 26 22:03:01.609: INFO: Unable to read wheezy_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.616: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.664: INFO: Unable to read jessie_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.667: INFO: Unable to read jessie_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.671: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.674: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:01.693: INFO: Lookups using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 failed for: [wheezy_udp@dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_udp@dns-test-service.dns-1231.svc.cluster.local jessie_tcp@dns-test-service.dns-1231.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local] Jun 26 
22:03:06.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.616: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.619: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.638: INFO: Unable to read jessie_udp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.640: INFO: Unable to read jessie_tcp@dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.643: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.645: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local from pod dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453: the server could not find the requested resource (get pods dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453) Jun 26 22:03:06.663: INFO: Lookups using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 failed for: [wheezy_udp@dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@dns-test-service.dns-1231.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_udp@dns-test-service.dns-1231.svc.cluster.local jessie_tcp@dns-test-service.dns-1231.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1231.svc.cluster.local] Jun 26 22:03:11.687: INFO: DNS probes using dns-1231/dns-test-cb69c876-abaa-47e6-bf15-fc48df4ad453 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:03:12.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1231" for this suite. 
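The repeated "could not find the requested resource" lines above are the polling phase of this test, not a failure: the probe pods rerun the same queries roughly every five seconds until cluster DNS has propagated the new service records, after which the run reports "DNS probes ... succeeded". Below is a minimal Go sketch of the two query shapes being retried, an A/AAAA lookup on the service name and an SRV lookup on its "http" port, reusing the names from the log. This is an illustration of the DNS queries involved, not the e2e framework's own probe code, and it only resolves when run inside the cluster.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Service name taken from the log above (namespace dns-1231).
	const svc = "dns-test-service.dns-1231.svc.cluster.local"

	// A/AAAA lookup for the ClusterIP service name.
	addrs, err := net.LookupHost(svc)
	fmt.Println("A:", addrs, err)

	// SRV lookup: net.LookupSRV queries _http._tcp.<name>, the same
	// record shape as the wheezy/jessie probes above.
	_, srvs, err := net.LookupSRV("http", "tcp", svc)
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
}
```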
• [SLOW TEST:37.717 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":174,"skipped":3128,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:03:12.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2137dcb2-0026-4e69-88a9-af2ab4633cc4 STEP: Creating a pod to test consume configMaps Jun 26 22:03:12.618: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d" in namespace "projected-4855" to be "success or failure" Jun 26 22:03:12.635: INFO: Pod "pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.961953ms Jun 26 22:03:14.647: INFO: Pod "pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028875439s Jun 26 22:03:16.651: INFO: Pod "pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033265448s STEP: Saw pod success Jun 26 22:03:16.651: INFO: Pod "pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d" satisfied condition "success or failure" Jun 26 22:03:16.655: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d container projected-configmap-volume-test: STEP: delete the pod Jun 26 22:03:16.684: INFO: Waiting for pod pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d to disappear Jun 26 22:03:16.688: INFO: Pod pod-projected-configmaps-ffa406bf-5752-43ea-8489-6ce4df7e632d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:03:16.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4855" for this suite. 
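The projected-configMap test above creates a ConfigMap, mounts it into a pod through a `projected` volume, and treats the pod reaching `Succeeded` as proof the container could read the mapped file. A minimal sketch of the volume shape involved follows, built from the corev1 types the suite itself uses; the volume name, key, and path here are placeholders, not the test's actual values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0644)
	vol := corev1.Volume{
		Name: "projected-configmap-volume", // placeholder name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume", // placeholder
						},
						// Items remaps a key to a file path under the mount.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // placeholder key
							Path: "path/to/data-1", // placeholder path
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```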
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:03:16.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:03:17.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:03:19.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805797, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805797, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805797, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805797, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:03:22.197: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:03:32.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-5362" for this suite. STEP: Destroying namespace "webhook-5362-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.939 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":176,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:03:32.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5636/secret-test-30900268-9b45-4f06-9558-1f531f3b59e2 STEP: Creating a pod to test consume secrets Jun 26 22:03:32.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1" in namespace "secrets-5636" to be "success or failure" Jun 26 22:03:32.701: INFO: Pod "pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642705ms Jun 26 22:03:34.740: INFO: Pod "pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044016699s Jun 26 22:03:36.744: INFO: Pod "pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048068595s STEP: Saw pod success Jun 26 22:03:36.744: INFO: Pod "pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1" satisfied condition "success or failure" Jun 26 22:03:36.747: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1 container env-test: STEP: delete the pod Jun 26 22:03:36.793: INFO: Waiting for pod pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1 to disappear Jun 26 22:03:36.854: INFO: Pod pod-configmaps-65c2a999-31b4-4844-8807-8dac6d8955b1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:03:36.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5636" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":3171,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:03:36.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:03:37.473: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:03:39.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805817, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805817, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805817, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805817, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:03:42.512: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:03:54.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-834" for this suite. 
STEP: Destroying namespace "webhook-834-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.890 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":178,"skipped":3171,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:03:54.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 26 22:03:59.397: INFO: Successfully updated pod "labelsupdate1b304017-62f6-4d1c-89c4-b66f194bce79" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:04:01.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-277" for this suite. 
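The Downward API test above mounts the pod's labels as a file and then patches those labels; "Successfully updated pod" marks the point after which the kubelet rewrites the mounted file, which the test then polls until the new labels appear. A minimal sketch of the `downwardAPI` volume that exposes `metadata.labels` inside the container; the volume name is a placeholder.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // placeholder volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				// Each item becomes a file; the kubelet refreshes the file
				// contents when the referenced pod metadata changes.
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "labels",
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.labels",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```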
• [SLOW TEST:6.692 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3185,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:04:01.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:04:01.524: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 26 22:04:04.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2295 create -f -' Jun 26 22:04:09.541: INFO: stderr: "" Jun 26 22:04:09.541: INFO: stdout: "e2e-test-crd-publish-openapi-654-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 26 22:04:09.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2295 delete e2e-test-crd-publish-openapi-654-crds test-cr' Jun 26 22:04:09.656: INFO: stderr: "" Jun 26 22:04:09.656: INFO: stdout: "e2e-test-crd-publish-openapi-654-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 26 22:04:09.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2295 apply -f -' Jun 26 22:04:10.973: INFO: stderr: "" Jun 26 22:04:10.973: INFO: stdout: "e2e-test-crd-publish-openapi-654-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 26 22:04:10.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2295 delete e2e-test-crd-publish-openapi-654-crds test-cr' Jun 26 22:04:11.078: INFO: stderr: "" Jun 26 22:04:11.078: INFO: stdout: "e2e-test-crd-publish-openapi-654-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 26 22:04:11.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-654-crds' Jun 26 22:04:11.832: INFO: stderr: "" Jun 26 22:04:11.832: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-654-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 
22:04:13.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2295" for this suite. • [SLOW TEST:12.265 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":180,"skipped":3191,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:04:13.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 26 22:04:13.778: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 22:04:13.789: INFO: Waiting for terminating namespaces to be deleted... Jun 26 22:04:13.792: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 26 22:04:13.799: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.799: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 22:04:13.799: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.799: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 22:04:13.799: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 26 22:04:13.803: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.803: INFO: Container kindnet-cni ready: true, restart count 2 Jun 26 22:04:13.803: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.803: INFO: Container kube-bench ready: false, restart count 0 Jun 26 22:04:13.803: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.803: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 22:04:13.803: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.803: INFO: Container kube-hunter ready: false, restart count 0 Jun 26 22:04:13.803: INFO: labelsupdate1b304017-62f6-4d1c-89c4-b66f194bce79 from downward-api-277 started at 2020-06-26 22:03:54 +0000 UTC (1 container statuses recorded) Jun 26 22:04:13.803: INFO: Container client-container ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ec5218fd-734e-4c4e-bc46-56789dd1acc5 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-ec5218fd-734e-4c4e-bc46-56789dd1acc5 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ec5218fd-734e-4c4e-bc46-56789dd1acc5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:04:21.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4339" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.278 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":181,"skipped":3197,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:04:21.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 26 22:04:22.067: INFO: Waiting up to 5m0s for pod "pod-dcb24061-d80d-40e6-b910-5fb126a956a7" in namespace "emptydir-832" to be "success or failure" Jun 26 22:04:22.069: INFO: Pod "pod-dcb24061-d80d-40e6-b910-5fb126a956a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441505ms Jun 26 22:04:24.073: INFO: Pod "pod-dcb24061-d80d-40e6-b910-5fb126a956a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006350061s Jun 26 22:04:26.077: INFO: Pod "pod-dcb24061-d80d-40e6-b910-5fb126a956a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010378581s STEP: Saw pod success Jun 26 22:04:26.077: INFO: Pod "pod-dcb24061-d80d-40e6-b910-5fb126a956a7" satisfied condition "success or failure" Jun 26 22:04:26.079: INFO: Trying to get logs from node jerma-worker pod pod-dcb24061-d80d-40e6-b910-5fb126a956a7 container test-container: STEP: delete the pod Jun 26 22:04:26.118: INFO: Waiting for pod pod-dcb24061-d80d-40e6-b910-5fb126a956a7 to disappear Jun 26 22:04:26.140: INFO: Pod pod-dcb24061-d80d-40e6-b910-5fb126a956a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:04:26.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-832" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3214,"failed":0} SSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:04:26.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 26 22:04:26.253: INFO: Created pod &Pod{ObjectMeta:{dns-7696 dns-7696 /api/v1/namespaces/dns-7696/pods/dns-7696 97fdfb8a-b1d5-429d-908e-4a0196331d8e 27547339 0 2020-06-26 22:04:26 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f77k6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f77k6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f77k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Jun 26 22:04:30.267: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7696 PodName:dns-7696 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:04:30.267: INFO: >>> kubeConfig: /root/.kube/config I0626 22:04:30.303209 6 log.go:172] (0xc003ba16b0) (0xc001e2e6e0) Create stream I0626 22:04:30.303246 6 log.go:172] (0xc003ba16b0) (0xc001e2e6e0) Stream added, broadcasting: 1 I0626 22:04:30.305797 6 log.go:172] (0xc003ba16b0) Reply frame received for 1 I0626 22:04:30.305847 6 log.go:172] (0xc003ba16b0) (0xc001e2e780) Create stream I0626 22:04:30.305870 6 log.go:172] (0xc003ba16b0) (0xc001e2e780) Stream added, broadcasting: 3 I0626 22:04:30.306983 6 log.go:172] (0xc003ba16b0) Reply frame received for 3 I0626 22:04:30.307017 6 log.go:172] (0xc003ba16b0) (0xc0026cc3c0) Create stream I0626 22:04:30.307029 6 log.go:172] (0xc003ba16b0) (0xc0026cc3c0) Stream added, broadcasting: 5 I0626 22:04:30.308004 6 log.go:172] (0xc003ba16b0) Reply frame received for 5 I0626 22:04:30.405095 6 log.go:172] (0xc003ba16b0) Data frame received for 3 I0626 22:04:30.405275 6 log.go:172] (0xc001e2e780) (3) Data frame handling I0626 22:04:30.405296 6 log.go:172] (0xc001e2e780) (3) Data frame sent I0626 22:04:30.407405 6 log.go:172] (0xc003ba16b0) Data frame received for 3 I0626 22:04:30.407437 6 log.go:172] (0xc001e2e780) (3) Data frame handling I0626 22:04:30.407457 6 log.go:172] (0xc003ba16b0) Data frame received for 5 I0626 22:04:30.407463 6 log.go:172] (0xc0026cc3c0) (5) Data frame handling I0626 22:04:30.408956 6 log.go:172] (0xc003ba16b0) Data frame received for 1 I0626 22:04:30.408979 6 log.go:172] (0xc001e2e6e0) (1) Data frame handling I0626 22:04:30.409012 6 log.go:172] (0xc001e2e6e0) (1) Data frame sent I0626 22:04:30.409029 6 log.go:172] (0xc003ba16b0) (0xc001e2e6e0) Stream removed, broadcasting: 1 I0626 22:04:30.409050 6 log.go:172] (0xc003ba16b0) Go away received I0626 22:04:30.409286 6 log.go:172] (0xc003ba16b0) (0xc001e2e6e0) Stream removed, broadcasting: 1 I0626 22:04:30.409315 6 log.go:172] (0xc003ba16b0) (0xc001e2e780) Stream removed, broadcasting: 3 I0626 22:04:30.409332 6 log.go:172] (0xc003ba16b0) (0xc0026cc3c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jun 26 22:04:30.409: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7696 PodName:dns-7696 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:04:30.409: INFO: >>> kubeConfig: /root/.kube/config I0626 22:04:30.440927 6 log.go:172] (0xc0045f8f20) (0xc0026cca00) Create stream I0626 22:04:30.440953 6 log.go:172] (0xc0045f8f20) (0xc0026cca00) Stream added, broadcasting: 1 I0626 22:04:30.442737 6 log.go:172] (0xc0045f8f20) Reply frame received for 1 I0626 22:04:30.442780 6 log.go:172] (0xc0045f8f20) (0xc0026ccb40) Create stream I0626 22:04:30.442795 6 log.go:172] (0xc0045f8f20) (0xc0026ccb40) Stream added, broadcasting: 3 I0626 22:04:30.443847 6 log.go:172] (0xc0045f8f20) Reply frame received for 3 I0626 22:04:30.443873 6 log.go:172] (0xc0045f8f20) (0xc0026cce60) Create stream I0626 22:04:30.443887 6 log.go:172] (0xc0045f8f20) (0xc0026cce60) Stream added, broadcasting: 5 I0626 22:04:30.444775 6 log.go:172] (0xc0045f8f20) Reply frame received for 5 I0626 22:04:30.509534 6 log.go:172] (0xc0045f8f20) Data frame received for 3 I0626 22:04:30.509576 6 log.go:172] (0xc0026ccb40) (3) Data frame handling I0626 22:04:30.509585 6 log.go:172] (0xc0026ccb40) (3) Data frame sent I0626 22:04:30.511149 6 log.go:172] (0xc0045f8f20) Data frame received for 5 I0626 22:04:30.511172 6 log.go:172] (0xc0026cce60) (5) Data frame handling I0626 22:04:30.511201 6 log.go:172] (0xc0045f8f20) Data frame received for 3 I0626 22:04:30.511211 6 log.go:172] (0xc0026ccb40) (3) Data frame handling I0626 22:04:30.514367 6 log.go:172] (0xc0045f8f20) Data frame received for 1 I0626 22:04:30.514392 6 log.go:172] (0xc0026cca00) (1) Data frame handling I0626 22:04:30.514412 6 log.go:172] (0xc0026cca00) (1) Data frame sent I0626 22:04:30.514426 6 log.go:172] (0xc0045f8f20) (0xc0026cca00) Stream removed, broadcasting: 1 I0626 22:04:30.514441 6 log.go:172] (0xc0045f8f20) Go away received I0626 22:04:30.514574 6 log.go:172] (0xc0045f8f20) (0xc0026cca00) Stream removed, broadcasting: 1 I0626 22:04:30.514593 6 log.go:172] (0xc0045f8f20) (0xc0026ccb40) Stream removed, broadcasting: 3 I0626 22:04:30.514604 6 log.go:172] (0xc0045f8f20) (0xc0026cce60) Stream removed, broadcasting: 5 Jun 26 22:04:30.514: INFO: Deleting pod dns-7696... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:04:30.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7696" for this suite. 
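The test above sets `dnsPolicy: None` with a custom `dnsConfig`, so the kubelet writes the pod's resolv.conf entirely from the supplied values; the two agnhost execs then read the configured search suffixes and nameserver list back out of the running pod. A minimal sketch of the relevant PodSpec fields, using the same nameserver and search values shown in the pod dump above (containers omitted for brevity).

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		// DNSNone tells the kubelet to build resolv.conf purely from
		// DNSConfig, ignoring the node and cluster DNS settings.
		DNSPolicy: corev1.DNSNone,
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
		},
	}
	fmt.Printf("%+v\n", spec)
}
```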
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":183,"skipped":3217,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:04:30.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:04:30.868: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 26 22:04:30.916: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:31.010: INFO: Number of nodes with available pods: 0 Jun 26 22:04:31.010: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:04:32.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:32.018: INFO: Number of nodes with available pods: 0 Jun 26 22:04:32.018: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:04:33.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:33.018: INFO: Number of nodes with available pods: 0 Jun 26 22:04:33.018: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:04:34.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:34.018: INFO: Number of nodes with available pods: 0 Jun 26 22:04:34.018: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:04:35.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:35.019: INFO: Number of nodes with available pods: 1 Jun 26 22:04:35.019: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:04:36.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:36.019: INFO: Number of nodes with available pods: 2 Jun 26 22:04:36.019: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 26 22:04:36.142: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 26 22:04:36.142: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:36.146: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:37.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:37.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:37.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:38.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:38.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:38.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:39.152: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:39.152: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:39.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:40.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:40.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:40.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:40.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:41.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:41.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:41.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:41.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:42.150: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:42.150: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:42.150: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 26 22:04:42.154: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:43.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:43.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:43.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:43.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:44.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:44.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:44.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:44.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:45.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:45.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:45.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:45.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:46.160: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:46.160: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:46.160: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:46.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:47.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:47.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:47.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:47.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:48.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:48.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:48.151: INFO: Wrong image for pod: daemon-set-xbxc6. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:48.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:49.151: INFO: Wrong image for pod: daemon-set-mh2ns. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:49.151: INFO: Pod daemon-set-mh2ns is not available Jun 26 22:04:49.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:49.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:50.151: INFO: Pod daemon-set-8nsgq is not available Jun 26 22:04:50.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:50.154: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:51.151: INFO: Pod daemon-set-8nsgq is not available Jun 26 22:04:51.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:51.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:52.151: INFO: Pod daemon-set-8nsgq is not available Jun 26 22:04:52.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:52.154: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:53.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:53.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:54.167: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:54.167: INFO: Pod daemon-set-xbxc6 is not available Jun 26 22:04:54.172: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:55.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:55.151: INFO: Pod daemon-set-xbxc6 is not available Jun 26 22:04:55.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:56.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 26 22:04:56.151: INFO: Pod daemon-set-xbxc6 is not available Jun 26 22:04:56.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:57.151: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:57.151: INFO: Pod daemon-set-xbxc6 is not available Jun 26 22:04:57.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:58.150: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:58.150: INFO: Pod daemon-set-xbxc6 is not available Jun 26 22:04:58.153: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:04:59.152: INFO: Wrong image for pod: daemon-set-xbxc6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 22:04:59.152: INFO: Pod daemon-set-xbxc6 is not available Jun 26 22:04:59.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:05:00.154: INFO: Pod daemon-set-pv7mg is not available Jun 26 22:05:00.158: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 26 22:05:00.160: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:05:00.162: INFO: Number of nodes with available pods: 1 Jun 26 22:05:00.162: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 22:05:01.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:05:01.171: INFO: Number of nodes with available pods: 1 Jun 26 22:05:01.171: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 22:05:02.215: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:05:02.219: INFO: Number of nodes with available pods: 2 Jun 26 22:05:02.219: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9769, will wait for the garbage collector to delete the pods Jun 26 22:05:02.293: INFO: Deleting DaemonSet.extensions daemon-set took: 6.103963ms Jun 26 22:05:02.593: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.352966ms Jun 26 22:05:09.296: INFO: Number of nodes with available pods: 0 Jun 26 22:05:09.296: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 22:05:09.298: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9769/daemonsets","resourceVersion":"27547596"},"items":null} Jun 26 22:05:09.301: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9769/pods","resourceVersion":"27547596"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:09.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9769" for this suite. 
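The rollout traced above swaps the DaemonSet's image and waits for each node's pod to be replaced. The same update can be driven by hand; a minimal sketch, assuming a container named app and a daemonset-name label as in the e2e fixtures (both are assumptions, neither is shown in this log):

# Update the image; with updateStrategy RollingUpdate this replaces pods node by node.
kubectl -n daemonsets-9769 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
# Block until every node runs the new pod template.
kubectl -n daemonsets-9769 rollout status daemonset/daemon-set --timeout=5m
# Confirm one updated pod per schedulable node.
kubectl -n daemonsets-9769 get pods -l daemonset-name=daemon-set -o wide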
• [SLOW TEST:38.753 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":184,"skipped":3220,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:09.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:09.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4278" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":185,"skipped":3226,"failed":0} SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:09.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 26 22:05:09.484: INFO: Waiting up to 5m0s for pod "downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb" in namespace "downward-api-1011" to be "success or failure" Jun 26 22:05:09.488: INFO: Pod "downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.427932ms Jun 26 22:05:11.492: INFO: Pod "downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007659907s Jun 26 22:05:13.496: INFO: Pod "downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011644484s STEP: Saw pod success Jun 26 22:05:13.496: INFO: Pod "downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb" satisfied condition "success or failure" Jun 26 22:05:13.499: INFO: Trying to get logs from node jerma-worker pod downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb container dapi-container: STEP: delete the pod Jun 26 22:05:13.536: INFO: Waiting for pod downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb to disappear Jun 26 22:05:13.542: INFO: Pod downward-api-2e5a0e5c-482a-439c-b9d5-54fb9d1c5ffb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:13.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1011" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3228,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:13.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:05:13.618: INFO: Waiting up to 5m0s for pod "downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876" in namespace "projected-2474" to be "success or failure" Jun 26 22:05:13.621: INFO: Pod "downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991677ms Jun 26 22:05:15.626: INFO: Pod "downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007862742s Jun 26 22:05:17.639: INFO: Pod "downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021189661s STEP: Saw pod success Jun 26 22:05:17.639: INFO: Pod "downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876" satisfied condition "success or failure" Jun 26 22:05:17.642: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876 container client-container: STEP: delete the pod Jun 26 22:05:17.659: INFO: Waiting for pod downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876 to disappear Jun 26 22:05:17.663: INFO: Pod downwardapi-volume-468a422c-610c-4565-826c-b8a8954a7876 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:17.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2474" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3228,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:17.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6840.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6840.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 22:05:24.020: INFO: DNS probes using dns-6840/dns-test-98c28cbe-9c0d-4b03-8912-32e8a5186b47 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:24.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6840" for this suite. • [SLOW TEST:6.438 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":188,"skipped":3231,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:24.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:30.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7660" for this suite. STEP: Destroying namespace "nsdeletetest-447" for this suite. Jun 26 22:05:30.830: INFO: Namespace nsdeletetest-447 was already deleted STEP: Destroying namespace "nsdeletetest-5416" for this suite. 
• [SLOW TEST:6.745 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":189,"skipped":3238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:30.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:37.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3867" for this suite. 
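The guarantee asserted by this test is that independent watchers on the same resource observe events in the same sequence; the test replays watches from each recorded resourceVersion. A looser shell illustration of the same idea, using two concurrent watches and hypothetical names:

kubectl -n default get configmaps --watch-only -o name > watcher-a.log & A=$!
kubectl -n default get configmaps --watch-only -o name > watcher-b.log & B=$!
for i in 1 2 3; do kubectl -n default create configmap order-demo-$i --from-literal=k=v; done
kubectl -n default delete configmap order-demo-1 order-demo-2 order-demo-3
sleep 2; kill $A $B
# Both watchers should have recorded the events in the same order.
diff watcher-a.log watcher-b.log && echo "identical event order"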
• [SLOW TEST:6.357 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":190,"skipped":3270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:37.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:05:37.265: INFO: Creating ReplicaSet my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463 Jun 26 22:05:37.364: INFO: Pod name my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463: Found 0 pods out of 1 Jun 26 22:05:42.368: INFO: Pod name my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463: Found 1 pods out of 1 Jun 26 22:05:42.368: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463" is running Jun 26 22:05:42.375: INFO: Pod "my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463-6c5h4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:05:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:05:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:05:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:05:37 +0000 UTC Reason: Message:}]) Jun 26 22:05:42.375: INFO: Trying to dial the pod Jun 26 22:05:47.445: INFO: Controller my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463: Got expected result from replica 1 [my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463-6c5h4]: "my-hostname-basic-fbba3e6d-4bbe-4e49-91a5-9c6d8a098463-6c5h4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:47.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7461" for this suite. 
• [SLOW TEST:10.240 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":191,"skipped":3374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:47.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:05:47.970: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:05:49.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805947, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805947, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805948, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728805947, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:05:53.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:05:53.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1720" for this suite. STEP: Destroying namespace "webhook-1720-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.077 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":192,"skipped":3398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:05:53.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:10.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1539" for this suite. • [SLOW TEST:17.185 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":193,"skipped":3429,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:10.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod Jun 26 22:06:10.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4026 -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 26 22:06:10.929: INFO: stderr: "" Jun 26 22:06:10.929: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jun 26 22:06:10.929: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 26 22:06:10.929: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4026" to be "running and ready, or succeeded" Jun 26 22:06:10.935: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.903623ms Jun 26 22:06:12.939: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009726288s Jun 26 22:06:14.943: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.014364381s Jun 26 22:06:14.943: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 26 22:06:14.943: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Jun 26 22:06:14.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4026' Jun 26 22:06:15.059: INFO: stderr: "" Jun 26 22:06:15.059: INFO: stdout: "I0626 22:06:13.233309 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/w55 265\nI0626 22:06:13.433546 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/h4q 202\nI0626 22:06:13.633497 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/4wss 517\nI0626 22:06:13.833492 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/jp82 307\nI0626 22:06:14.033477 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/tsb 524\nI0626 22:06:14.233489 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/kdg 333\nI0626 22:06:14.433442 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/kp9c 325\nI0626 22:06:14.633593 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/44h 208\nI0626 22:06:14.833524 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/nlg 377\nI0626 22:06:15.033457 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/6mmb 202\n" STEP: limiting log lines Jun 26 22:06:15.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4026 --tail=1' Jun 26 22:06:15.167: INFO: stderr: "" Jun 26 22:06:15.167: INFO: stdout: "I0626 22:06:15.033457 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/6mmb 202\n" Jun 26 22:06:15.167: INFO: got output "I0626 22:06:15.033457 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/6mmb 202\n" STEP: limiting log bytes Jun 26 22:06:15.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4026 --limit-bytes=1' Jun 26 22:06:15.268: INFO: stderr: "" Jun 26 22:06:15.268: INFO: stdout: "I" Jun 26 22:06:15.268: INFO: got output "I" STEP: exposing timestamps Jun 26 22:06:15.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4026 --tail=1 --timestamps' Jun 26 22:06:15.363: INFO: stderr: "" Jun 26 22:06:15.363: INFO: stdout: "2020-06-26T22:06:15.233585282Z I0626 22:06:15.233427 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/g5f 572\n" Jun 26 22:06:15.363: INFO: got output "2020-06-26T22:06:15.233585282Z I0626 22:06:15.233427 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/g5f 572\n" STEP: restricting to a time range Jun 26 22:06:17.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4026 --since=1s' Jun 26 22:06:17.976: INFO: stderr: "" Jun 26 22:06:17.976: INFO: stdout: "I0626 22:06:17.033500 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/kdnh 346\nI0626 22:06:17.233474 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/xsj 337\nI0626 22:06:17.433423 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/x46 499\nI0626 22:06:17.633473 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/spf 369\nI0626 22:06:17.833521 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/b5xv 575\n" Jun 26 22:06:17.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4026 --since=24h' Jun 26 22:06:18.085: INFO: stderr: "" Jun 26 22:06:18.085: INFO: stdout:
"I0626 22:06:13.233309 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/w55 265\nI0626 22:06:13.433546 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/h4q 202\nI0626 22:06:13.633497 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/4wss 517\nI0626 22:06:13.833492 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/jp82 307\nI0626 22:06:14.033477 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/tsb 524\nI0626 22:06:14.233489 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/kdg 333\nI0626 22:06:14.433442 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/kp9c 325\nI0626 22:06:14.633593 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/44h 208\nI0626 22:06:14.833524 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/nlg 377\nI0626 22:06:15.033457 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/6mmb 202\nI0626 22:06:15.233427 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/g5f 572\nI0626 22:06:15.433462 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/qw2 465\nI0626 22:06:15.633487 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/l5d 241\nI0626 22:06:15.833433 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/zbs 207\nI0626 22:06:16.033486 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/5b2 283\nI0626 22:06:16.233467 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/rlnt 529\nI0626 22:06:16.433470 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/hdd 218\nI0626 22:06:16.633499 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/zl2 450\nI0626 22:06:16.833461 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/lskt 343\nI0626 22:06:17.033500 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/kdnh 346\nI0626 22:06:17.233474 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/xsj 337\nI0626 22:06:17.433423 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/x46 499\nI0626 22:06:17.633473 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/spf 369\nI0626 22:06:17.833521 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/b5xv 575\nI0626 22:06:18.033425 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/zdsh 245\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Jun 26 22:06:18.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4026' Jun 26 22:06:20.338: INFO: stderr: "" Jun 26 22:06:20.338: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:20.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4026" for this suite. 
• [SLOW TEST:9.627 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":194,"skipped":3435,"failed":0} S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:20.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:25.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9650" for this suite. • [SLOW TEST:5.104 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":195,"skipped":3436,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:25.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jun 26 22:06:25.541: INFO: Waiting up to 5m0s for pod "var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a" in namespace "var-expansion-6631" to be "success or failure" Jun 26 22:06:25.545: INFO: Pod "var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.597197ms Jun 26 22:06:27.575: INFO: Pod "var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03377127s Jun 26 22:06:29.580: INFO: Pod "var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038319408s STEP: Saw pod success Jun 26 22:06:29.580: INFO: Pod "var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a" satisfied condition "success or failure" Jun 26 22:06:29.583: INFO: Trying to get logs from node jerma-worker pod var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a container dapi-container: STEP: delete the pod Jun 26 22:06:29.628: INFO: Waiting for pod var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a to disappear Jun 26 22:06:29.640: INFO: Pod var-expansion-331b1b50-5349-427b-a853-87f5832c3e8a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:29.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6631" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3447,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:29.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-8e3ac32a-f24d-4e33-8d9d-be26ea19d213 STEP: Creating a pod to test consume secrets Jun 26 22:06:29.732: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33" in namespace "projected-5121" to be "success or failure" Jun 26 22:06:29.736: INFO: Pod "pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.540186ms Jun 26 22:06:31.740: INFO: Pod "pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008095756s Jun 26 22:06:33.744: INFO: Pod "pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012331087s STEP: Saw pod success Jun 26 22:06:33.744: INFO: Pod "pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33" satisfied condition "success or failure" Jun 26 22:06:33.747: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33 container projected-secret-volume-test: STEP: delete the pod Jun 26 22:06:33.770: INFO: Waiting for pod pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33 to disappear Jun 26 22:06:33.773: INFO: Pod pod-projected-secrets-7234d07d-e571-47a7-b671-52a297dc3f33 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:33.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5121" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3448,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:33.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 26 22:06:38.408: INFO: Successfully updated pod "pod-update-6e27af76-23d4-4ba0-89de-60dfdbbfb345" STEP: verifying the updated pod is in kubernetes Jun 26 22:06:38.417: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3100" for this suite. 
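The update performed by this test is an in-place mutation of a running pod's metadata, which the API server accepts without restarting the container. A small sketch with hypothetical names (the label key is illustrative):

kubectl run update-demo --restart=Never --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 -- pause
kubectl wait pod update-demo --for=condition=Ready --timeout=2m
# Mutable fields such as labels can be changed on a live pod.
kubectl label pod update-demo time=updated --overwrite
kubectl get pod update-demo --show-labels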
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3470,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:38.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jun 26 22:06:38.543: INFO: Waiting up to 5m0s for pod "var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b" in namespace "var-expansion-8395" to be "success or failure" Jun 26 22:06:38.568: INFO: Pod "var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.327197ms Jun 26 22:06:40.587: INFO: Pod "var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044054117s Jun 26 22:06:42.592: INFO: Pod "var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048550154s STEP: Saw pod success Jun 26 22:06:42.592: INFO: Pod "var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b" satisfied condition "success or failure" Jun 26 22:06:42.595: INFO: Trying to get logs from node jerma-worker pod var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b container dapi-container: STEP: delete the pod Jun 26 22:06:42.691: INFO: Waiting for pod var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b to disappear Jun 26 22:06:42.696: INFO: Pod var-expansion-45487883-3542-44ff-9b1b-e0c64c0ef62b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:42.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8395" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3484,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:42.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 26 22:06:42.809: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Jun 26 22:06:43.503: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 26 22:06:45.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 22:06:47.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806003, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 22:06:50.332: INFO: Waited 628.43507ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3770" for this suite. • [SLOW TEST:8.238 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":200,"skipped":3502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:50.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-8a1080a8-28ea-4855-83a3-e4e14e9c9a1e STEP: Creating a pod to test consume secrets Jun 26 22:06:51.285: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922" in namespace "projected-4800" to be "success or failure" Jun 26 22:06:51.289: INFO: Pod "pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083264ms Jun 26 22:06:53.330: INFO: Pod "pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044625971s Jun 26 22:06:55.337: INFO: Pod "pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051941117s STEP: Saw pod success Jun 26 22:06:55.337: INFO: Pod "pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922" satisfied condition "success or failure" Jun 26 22:06:55.339: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922 container projected-secret-volume-test: STEP: delete the pod Jun 26 22:06:55.376: INFO: Waiting for pod pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922 to disappear Jun 26 22:06:55.403: INFO: Pod pod-projected-secrets-3db79a18-5ff5-4f6b-927a-cd3c3c76b922 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:55.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4800" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3526,"failed":0} ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:55.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6ad44a2e-e417-4c5c-95d9-820eeec9655b STEP: Creating a pod to test consume secrets Jun 26 22:06:55.593: INFO: Waiting up to 5m0s for pod "pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e" in namespace "secrets-4238" to be "success or failure" Jun 26 22:06:55.607: INFO: Pod "pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.937048ms Jun 26 22:06:57.611: INFO: Pod "pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017636909s Jun 26 22:06:59.614: INFO: Pod "pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021558853s STEP: Saw pod success Jun 26 22:06:59.615: INFO: Pod "pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e" satisfied condition "success or failure" Jun 26 22:06:59.617: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e container secret-volume-test: STEP: delete the pod Jun 26 22:06:59.678: INFO: Waiting for pod pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e to disappear Jun 26 22:06:59.684: INFO: Pod pod-secrets-6e825f4d-b1a7-435e-87ac-803a510f2e2e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:06:59.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4238" for this suite. 
STEP: Destroying namespace "secret-namespace-3753" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3526,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:06:59.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 26 22:07:07.827: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 22:07:07.830: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 22:07:09.830: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 22:07:09.847: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 22:07:11.830: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 22:07:11.834: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 22:07:13.830: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 22:07:13.841: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:07:13.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3428" for this suite. 
• [SLOW TEST:14.223 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3531,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:07:13.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:07:14.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:07:16.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806034, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806034, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806034, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806034, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:07:19.726: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 26 22:07:19.836: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:07:19.879: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7730" for this suite. STEP: Destroying namespace "webhook-7730-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.107 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":204,"skipped":3540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:07:20.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:07:33.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5957" for this suite. • [SLOW TEST:13.245 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":205,"skipped":3582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:07:33.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4942 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4942 STEP: creating replication controller externalsvc in namespace services-4942 I0626 22:07:33.489089 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4942, replica count: 2 I0626 22:07:36.539630 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 22:07:39.539860 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 26 22:07:39.599: INFO: Creating new exec pod Jun 26 22:07:43.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4942 execpod4zp6s -- /bin/sh -x -c nslookup nodeport-service' Jun 26 22:07:44.084: INFO: stderr: "I0626 22:07:43.783227 2345 log.go:172] (0xc000a026e0) (0xc0009401e0) Create stream\nI0626 22:07:43.783285 2345 log.go:172] (0xc000a026e0) (0xc0009401e0) Stream added, broadcasting: 1\nI0626 22:07:43.785711 2345 log.go:172] (0xc000a026e0) Reply frame received for 1\nI0626 22:07:43.785774 2345 log.go:172] (0xc000a026e0) (0xc00057e6e0) Create stream\nI0626 22:07:43.785798 2345 log.go:172] (0xc000a026e0) (0xc00057e6e0) Stream added, broadcasting: 3\nI0626 22:07:43.786812 2345 log.go:172] (0xc000a026e0) Reply frame received for 3\nI0626 22:07:43.786834 2345 log.go:172] (0xc000a026e0) (0xc0007a7400) Create stream\nI0626 22:07:43.786842 2345 log.go:172] (0xc000a026e0) (0xc0007a7400) Stream added, broadcasting: 5\nI0626 22:07:43.787610 2345 log.go:172] (0xc000a026e0) Reply frame received for 5\nI0626 22:07:43.909984 2345 log.go:172] (0xc000a026e0) Data frame received for 5\nI0626 22:07:43.910028 2345 log.go:172] (0xc0007a7400) (5) Data frame handling\nI0626 22:07:43.910056 2345 log.go:172] (0xc0007a7400) (5) Data frame sent\n+ nslookup nodeport-service\nI0626 22:07:44.073776 2345 log.go:172] (0xc000a026e0) Data frame received for 3\nI0626 22:07:44.073799 2345 log.go:172] (0xc00057e6e0) (3) Data frame handling\nI0626 22:07:44.073814 2345 log.go:172] (0xc00057e6e0) (3) Data frame sent\nI0626 22:07:44.074881 2345 log.go:172] (0xc000a026e0) Data frame received for 3\nI0626 22:07:44.074912 
2345 log.go:172] (0xc00057e6e0) (3) Data frame handling\nI0626 22:07:44.074933 2345 log.go:172] (0xc00057e6e0) (3) Data frame sent\nI0626 22:07:44.075540 2345 log.go:172] (0xc000a026e0) Data frame received for 5\nI0626 22:07:44.075572 2345 log.go:172] (0xc0007a7400) (5) Data frame handling\nI0626 22:07:44.075598 2345 log.go:172] (0xc000a026e0) Data frame received for 3\nI0626 22:07:44.075613 2345 log.go:172] (0xc00057e6e0) (3) Data frame handling\nI0626 22:07:44.077848 2345 log.go:172] (0xc000a026e0) Data frame received for 1\nI0626 22:07:44.077860 2345 log.go:172] (0xc0009401e0) (1) Data frame handling\nI0626 22:07:44.077867 2345 log.go:172] (0xc0009401e0) (1) Data frame sent\nI0626 22:07:44.077874 2345 log.go:172] (0xc000a026e0) (0xc0009401e0) Stream removed, broadcasting: 1\nI0626 22:07:44.078176 2345 log.go:172] (0xc000a026e0) (0xc0009401e0) Stream removed, broadcasting: 1\nI0626 22:07:44.078188 2345 log.go:172] (0xc000a026e0) (0xc00057e6e0) Stream removed, broadcasting: 3\nI0626 22:07:44.078194 2345 log.go:172] (0xc000a026e0) (0xc0007a7400) Stream removed, broadcasting: 5\n" Jun 26 22:07:44.084: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4942.svc.cluster.local\tcanonical name = externalsvc.services-4942.svc.cluster.local.\nName:\texternalsvc.services-4942.svc.cluster.local\nAddress: 10.110.25.161\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4942, will wait for the garbage collector to delete the pods Jun 26 22:07:44.145: INFO: Deleting ReplicationController externalsvc took: 6.84734ms Jun 26 22:07:44.246: INFO: Terminating ReplicationController externalsvc pods took: 100.21914ms Jun 26 22:07:59.575: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:07:59.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4942" for this suite. 
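The type flip in the middle of this test amounts to updating the Service to the following end state (a sketch; name, namespace, and externalName are copied from the log):

    apiVersion: v1
    kind: Service
    metadata:
      name: nodeport-service
      namespace: services-4942
    spec:
      type: ExternalName
      externalName: externalsvc.services-4942.svc.cluster.local
      # when converting an existing NodePort service, the allocated
      # clusterIP and nodePort also have to be cleared in the same update,
      # which the e2e framework does on the caller's behalf

After the update, cluster DNS answers queries for nodeport-service.services-4942.svc.cluster.local with a CNAME to the externalName target, which is exactly what the nslookup output above demonstrates.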
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.404 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":206,"skipped":3615,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:07:59.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 26 22:08:04.272: INFO: Successfully updated pod "labelsupdate767925f1-436a-4187-b131-b50e2c9776dc" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:08:06.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7643" for this suite. 
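The labelsupdate pod above is, in essence, a projected downward API volume exposing metadata.labels (a sketch; pod name, image, and label values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate-demo
      labels:
        key1: value1
    spec:
      containers:
      - name: client
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels

The "Successfully updated pod" line is the label mutation; the kubelet then rewrites /etc/podinfo/labels on a subsequent sync, and the test passes once the container observes the new contents.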
• [SLOW TEST:6.642 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3634,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:08:06.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:08:06.383: INFO: Create a RollingUpdate DaemonSet Jun 26 22:08:06.386: INFO: Check that daemon pods launch on every node of the cluster Jun 26 22:08:06.390: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:06.395: INFO: Number of nodes with available pods: 0 Jun 26 22:08:06.395: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:08:07.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:07.404: INFO: Number of nodes with available pods: 0 Jun 26 22:08:07.404: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:08:08.400: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:08.403: INFO: Number of nodes with available pods: 0 Jun 26 22:08:08.404: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:08:09.400: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:09.408: INFO: Number of nodes with available pods: 0 Jun 26 22:08:09.408: INFO: Node jerma-worker is running more than one daemon pod Jun 26 22:08:10.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:10.405: INFO: Number of nodes with available pods: 1 Jun 26 22:08:10.405: INFO: Node jerma-worker2 is running more than one daemon pod Jun 26 22:08:11.399: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:11.406: INFO: Number of nodes with available 
pods: 2 Jun 26 22:08:11.406: INFO: Number of running nodes: 2, number of available pods: 2 Jun 26 22:08:11.406: INFO: Update the DaemonSet to trigger a rollout Jun 26 22:08:11.414: INFO: Updating DaemonSet daemon-set Jun 26 22:08:19.428: INFO: Roll back the DaemonSet before rollout is complete Jun 26 22:08:19.434: INFO: Updating DaemonSet daemon-set Jun 26 22:08:19.434: INFO: Make sure DaemonSet rollback is complete Jun 26 22:08:19.454: INFO: Wrong image for pod: daemon-set-bmvnh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 26 22:08:19.454: INFO: Pod daemon-set-bmvnh is not available Jun 26 22:08:19.461: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:20.466: INFO: Wrong image for pod: daemon-set-bmvnh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 26 22:08:20.466: INFO: Pod daemon-set-bmvnh is not available Jun 26 22:08:20.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:21.465: INFO: Wrong image for pod: daemon-set-bmvnh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 26 22:08:21.465: INFO: Pod daemon-set-bmvnh is not available Jun 26 22:08:21.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:22.466: INFO: Wrong image for pod: daemon-set-bmvnh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 26 22:08:22.466: INFO: Pod daemon-set-bmvnh is not available Jun 26 22:08:22.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:23.466: INFO: Wrong image for pod: daemon-set-bmvnh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jun 26 22:08:23.466: INFO: Pod daemon-set-bmvnh is not available Jun 26 22:08:23.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 22:08:24.466: INFO: Pod daemon-set-lh9s5 is not available Jun 26 22:08:24.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-448, will wait for the garbage collector to delete the pods Jun 26 22:08:24.535: INFO: Deleting DaemonSet.extensions daemon-set took: 7.169955ms Jun 26 22:08:24.835: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237218ms Jun 26 22:08:28.038: INFO: Number of nodes with available pods: 0 Jun 26 22:08:28.038: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 22:08:28.040: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-448/daemonsets","resourceVersion":"27549206"},"items":null} Jun 26 22:08:28.042: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-448/pods","resourceVersion":"27549206"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:08:28.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-448" for this suite. 
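The rollout-and-rollback this test drives through the API can be sketched as a manifest plus two imperative steps (selector and container name are illustrative; the DaemonSet name, namespace, and images come from the log):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
      namespace: daemonsets-448
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine
    # trigger a rollout that can never finish, then roll back mid-flight:
    #   kubectl -n daemonsets-448 set image ds/daemon-set app=foo:non-existent
    #   kubectl -n daemonsets-448 rollout undo ds/daemon-set

Only daemon-set-bmvnh ever picked up the broken foo:non-existent image, and after the rollback only that pod is replaced; pods still running the good revision are left alone, which is the "without unnecessary restarts" property being verified.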
• [SLOW TEST:21.758 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":208,"skipped":3644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:08:28.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 26 22:08:28.139: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8836 /api/v1/namespaces/watch-8836/configmaps/e2e-watch-test-watch-closed 596a75d2-6be9-4a96-8101-825b66202c0c 27549213 0 2020-06-26 22:08:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 26 22:08:28.139: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8836 /api/v1/namespaces/watch-8836/configmaps/e2e-watch-test-watch-closed 596a75d2-6be9-4a96-8101-825b66202c0c 27549214 0 2020-06-26 22:08:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 26 22:08:28.150: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8836 /api/v1/namespaces/watch-8836/configmaps/e2e-watch-test-watch-closed 596a75d2-6be9-4a96-8101-825b66202c0c 27549215 0 2020-06-26 22:08:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 26 22:08:28.150: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8836 /api/v1/namespaces/watch-8836/configmaps/e2e-watch-test-watch-closed 596a75d2-6be9-4a96-8101-825b66202c0c 27549216 0 2020-06-26 22:08:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:08:28.150: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8836" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":209,"skipped":3678,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:08:28.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 26 22:08:28.252: INFO: Waiting up to 5m0s for pod "pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093" in namespace "emptydir-2189" to be "success or failure" Jun 26 22:08:28.268: INFO: Pod "pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093": Phase="Pending", Reason="", readiness=false. Elapsed: 15.68144ms Jun 26 22:08:30.274: INFO: Pod "pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02105432s Jun 26 22:08:32.278: INFO: Pod "pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025372921s STEP: Saw pod success Jun 26 22:08:32.278: INFO: Pod "pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093" satisfied condition "success or failure" Jun 26 22:08:32.281: INFO: Trying to get logs from node jerma-worker2 pod pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093 container test-container: STEP: delete the pod Jun 26 22:08:32.300: INFO: Waiting for pod pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093 to disappear Jun 26 22:08:32.304: INFO: Pod pod-c9a3f420-57cb-49d5-9c4b-7b44d2644093 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:08:32.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2189" for this suite. 
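The emptydir pod amounts to the following spec (a sketch; the pod name and image are illustrative, the suite uses its own mounttest image to report the mount type and mode):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory    # backs the volume with tmpfs instead of node disk

The check reads the mount table and permissions from inside the container and expects a tmpfs mount with the default world-writable mode, which is why the test carries the [LinuxOnly] tag.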
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:08:32.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:08:32.428: INFO: Creating deployment "test-recreate-deployment" Jun 26 22:08:32.474: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 26 22:08:32.511: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 26 22:08:34.518: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 26 22:08:34.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806112, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806112, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806112, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806112, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 22:08:36.525: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 26 22:08:36.532: INFO: Updating deployment test-recreate-deployment Jun 26 22:08:36.532: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 26 22:08:36.976: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7782 /apis/apps/v1/namespaces/deployment-7782/deployments/test-recreate-deployment cf8526e9-8c10-4dca-bbe2-faf36bea00bb 27549334 2 2020-06-26 22:08:32 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002277248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-26 22:08:36 +0000 UTC,LastTransitionTime:2020-06-26 22:08:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-06-26 22:08:36 +0000 UTC,LastTransitionTime:2020-06-26 22:08:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 26 22:08:36.981: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7782 /apis/apps/v1/namespaces/deployment-7782/replicasets/test-recreate-deployment-5f94c574ff 42c1c28b-b58c-48ce-a340-b65c549bc5b4 27549331 1 2020-06-26 22:08:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment cf8526e9-8c10-4dca-bbe2-faf36bea00bb 0xc000aa3e67 0xc000aa3e68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000aa3ec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 22:08:36.981: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 26 22:08:36.981: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7782 /apis/apps/v1/namespaces/deployment-7782/replicasets/test-recreate-deployment-799c574856 9025d6f0-8586-44be-88ee-30490a8320b5 27549323 2 2020-06-26 22:08:32 
+0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment cf8526e9-8c10-4dca-bbe2-faf36bea00bb 0xc000aa3f37 0xc000aa3f38}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000aa3fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 22:08:37.008: INFO: Pod "test-recreate-deployment-5f94c574ff-s984k" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-s984k test-recreate-deployment-5f94c574ff- deployment-7782 /api/v1/namespaces/deployment-7782/pods/test-recreate-deployment-5f94c574ff-s984k b15de83a-935d-4493-8b60-96515728f894 27549335 0 2020-06-26 22:08:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 42c1c28b-b58c-48ce-a340-b65c549bc5b4 0xc004cfbbe7 0xc004cfbbe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zxl4s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zxl4s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zxl4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:08:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:08:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:08:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 22:08:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-26 22:08:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:08:37.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7782" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":211,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:08:37.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 26 22:08:47.234: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 22:08:47.279: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 22:08:49.279: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 22:08:49.284: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 22:08:51.279: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 22:08:51.284: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:08:51.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3392" for this suite. 
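The pod-with-poststart-http-hook above reduces to a spec like this (a sketch; image and command are illustrative, and the host/port stand in for the handler pod the suite created in its BeforeEach):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-http-hook
    spec:
      containers:
      - name: main                               # illustrative container name
        image: docker.io/library/busybox:1.29    # illustrative image
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          postStart:
            httpGet:
              host: 10.244.1.2   # placeholder for the handler pod's IP
              port: 8080         # placeholder for the handler's port
              path: /echo?msg=poststart

The kubelet fires the GET right after the container starts, and the "check poststart hook" step passes once the handler pod reports having served that request.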
• [SLOW TEST:14.277 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:08:51.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jun 26 22:08:51.338: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:09:06.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6178" for this suite.
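"mark a version not served" is a single field flip on a multi-version CRD; the relevant shape looks roughly like this (the CRD name, group, and kinds are illustrative, the suite generates random ones):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: e2e-test-crds.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: e2e-test-crds
        singular: e2e-test-crd
        kind: E2eTestCrd
        listKind: E2eTestCrdList
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
      - name: v2
        served: false    # flipped from true: v2 drops out of the published
        storage: false   # OpenAPI spec while v1's definition stays intact
        schema:
          openAPIV3Schema:
            type: object

The two "check ..." steps above correspond to re-fetching the server's published OpenAPI document until the unserved version's definition disappears, then confirming the served version's definition is unchanged.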
• [SLOW TEST:15.148 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":213,"skipped":3784,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:09:06.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-24521906-1244-463a-b9e3-66e328dfed47 STEP: Creating a pod to test consume secrets Jun 26 22:09:06.562: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f" in namespace "projected-1811" to be "success or failure" Jun 26 22:09:06.569: INFO: Pod "pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.184614ms Jun 26 22:09:08.646: INFO: Pod "pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083894249s Jun 26 22:09:10.650: INFO: Pod "pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f": Phase="Running", Reason="", readiness=true. Elapsed: 4.0883808s Jun 26 22:09:12.655: INFO: Pod "pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092908637s STEP: Saw pod success Jun 26 22:09:12.655: INFO: Pod "pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f" satisfied condition "success or failure" Jun 26 22:09:12.658: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f container secret-volume-test: STEP: delete the pod Jun 26 22:09:12.679: INFO: Waiting for pod pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f to disappear Jun 26 22:09:12.682: INFO: Pod pod-projected-secrets-3f67606c-e235-486e-b2f5-5f88a1fd613f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:09:12.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1811" for this suite. 
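"Consumable in multiple volumes" just means the same Secret projected into the pod twice (a sketch; pod and volume names, the image, and the data-1 key are illustrative, the secret name is the one generated above):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
      namespace: projected-1811
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
          readOnly: true
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
          readOnly: true
      volumes:
      - name: secret-volume-1
        projected:
          sources:
          - secret:
              name: projected-secret-test-24521906-1244-463a-b9e3-66e328dfed47
      - name: secret-volume-2
        projected:
          sources:
          - secret:
              name: projected-secret-test-24521906-1244-463a-b9e3-66e328dfed47

Both mounts resolve to the same Secret contents; the test verifies the data is readable at both paths.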
• [SLOW TEST:6.247 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3810,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:09:12.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-472743d0-ba2f-47ff-a0bf-4578e22221fa STEP: Creating secret with name s-test-opt-upd-ec106947-5a26-4387-b574-ee8c7fa7d56d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-472743d0-ba2f-47ff-a0bf-4578e22221fa STEP: Updating secret s-test-opt-upd-ec106947-5a26-4387-b574-ee8c7fa7d56d STEP: Creating secret with name s-test-opt-create-428695f4-1cbc-4612-9392-14b9ac868a69 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:10:37.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4828" for this suite. 
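The three secrets above exercise the delete, update, and create paths of secret volumes; the "create" case relies on optional: true, which lets the pod start before the secret exists (a sketch; pod name, image, and mount path are illustrative, the secret name comes from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-secret-demo
      namespace: secrets-4828
    spec:
      containers:
      - name: watcher
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sh", "-c", "while true; do ls /etc/opt-create; sleep 5; done"]
        volumeMounts:
        - name: opt-create
          mountPath: /etc/opt-create
      volumes:
      - name: opt-create
        secret:
          secretName: s-test-opt-create-428695f4-1cbc-4612-9392-14b9ac868a69
          optional: true   # mount starts empty; the kubelet populates it
                           # once the secret is created

Because the kubelet refreshes secret volumes on a periodic sync rather than instantly, the "waiting to observe update in volume" step can take a while, which is consistent with the long runtime reported below.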
• [SLOW TEST:84.640 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3818,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:10:37.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jun 26 22:10:37.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9132' Jun 26 22:10:37.718: INFO: stderr: "" Jun 26 22:10:37.718: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 22:10:37.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9132' Jun 26 22:10:37.832: INFO: stderr: "" Jun 26 22:10:37.832: INFO: stdout: "update-demo-nautilus-r6fqf update-demo-nautilus-tglnl " Jun 26 22:10:37.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6fqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:37.920: INFO: stderr: "" Jun 26 22:10:37.920: INFO: stdout: "" Jun 26 22:10:37.920: INFO: update-demo-nautilus-r6fqf is created but not running Jun 26 22:10:42.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9132' Jun 26 22:10:43.023: INFO: stderr: "" Jun 26 22:10:43.023: INFO: stdout: "update-demo-nautilus-r6fqf update-demo-nautilus-tglnl " Jun 26 22:10:43.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6fqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:43.110: INFO: stderr: "" Jun 26 22:10:43.110: INFO: stdout: "true" Jun 26 22:10:43.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6fqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:43.198: INFO: stderr: "" Jun 26 22:10:43.198: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:10:43.198: INFO: validating pod update-demo-nautilus-r6fqf Jun 26 22:10:43.215: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:10:43.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:10:43.215: INFO: update-demo-nautilus-r6fqf is verified up and running Jun 26 22:10:43.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tglnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:43.307: INFO: stderr: "" Jun 26 22:10:43.307: INFO: stdout: "true" Jun 26 22:10:43.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tglnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:43.401: INFO: stderr: "" Jun 26 22:10:43.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:10:43.401: INFO: validating pod update-demo-nautilus-tglnl Jun 26 22:10:43.419: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:10:43.419: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:10:43.419: INFO: update-demo-nautilus-tglnl is verified up and running STEP: scaling down the replication controller Jun 26 22:10:43.442: INFO: scanned /root for discovery docs: Jun 26 22:10:43.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9132' Jun 26 22:10:44.555: INFO: stderr: "" Jun 26 22:10:44.555: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 22:10:44.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9132' Jun 26 22:10:44.652: INFO: stderr: "" Jun 26 22:10:44.653: INFO: stdout: "update-demo-nautilus-r6fqf update-demo-nautilus-tglnl " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 26 22:10:49.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9132' Jun 26 22:10:49.768: INFO: stderr: "" Jun 26 22:10:49.768: INFO: stdout: "update-demo-nautilus-tglnl " Jun 26 22:10:49.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tglnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:49.856: INFO: stderr: "" Jun 26 22:10:49.856: INFO: stdout: "true" Jun 26 22:10:49.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tglnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:49.947: INFO: stderr: "" Jun 26 22:10:49.947: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:10:49.947: INFO: validating pod update-demo-nautilus-tglnl Jun 26 22:10:49.951: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:10:49.951: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:10:49.951: INFO: update-demo-nautilus-tglnl is verified up and running STEP: scaling up the replication controller Jun 26 22:10:49.954: INFO: scanned /root for discovery docs: Jun 26 22:10:49.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9132' Jun 26 22:10:51.092: INFO: stderr: "" Jun 26 22:10:51.092: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 22:10:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9132' Jun 26 22:10:51.190: INFO: stderr: "" Jun 26 22:10:51.190: INFO: stdout: "update-demo-nautilus-dtjgz update-demo-nautilus-tglnl " Jun 26 22:10:51.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtjgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:51.276: INFO: stderr: "" Jun 26 22:10:51.276: INFO: stdout: "" Jun 26 22:10:51.276: INFO: update-demo-nautilus-dtjgz is created but not running Jun 26 22:10:56.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9132' Jun 26 22:10:56.376: INFO: stderr: "" Jun 26 22:10:56.376: INFO: stdout: "update-demo-nautilus-dtjgz update-demo-nautilus-tglnl " Jun 26 22:10:56.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtjgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:56.479: INFO: stderr: "" Jun 26 22:10:56.479: INFO: stdout: "true" Jun 26 22:10:56.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtjgz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:56.570: INFO: stderr: "" Jun 26 22:10:56.570: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:10:56.570: INFO: validating pod update-demo-nautilus-dtjgz Jun 26 22:10:56.574: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:10:56.574: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:10:56.574: INFO: update-demo-nautilus-dtjgz is verified up and running Jun 26 22:10:56.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tglnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:56.679: INFO: stderr: "" Jun 26 22:10:56.679: INFO: stdout: "true" Jun 26 22:10:56.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tglnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9132' Jun 26 22:10:56.775: INFO: stderr: "" Jun 26 22:10:56.775: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:10:56.775: INFO: validating pod update-demo-nautilus-tglnl Jun 26 22:10:56.795: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:10:56.795: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:10:56.795: INFO: update-demo-nautilus-tglnl is verified up and running STEP: using delete to clean up resources Jun 26 22:10:56.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9132' Jun 26 22:10:56.893: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 26 22:10:56.893: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 26 22:10:56.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9132' Jun 26 22:10:56.992: INFO: stderr: "No resources found in kubectl-9132 namespace.\n" Jun 26 22:10:56.992: INFO: stdout: "" Jun 26 22:10:56.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9132 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 22:10:57.097: INFO: stderr: "" Jun 26 22:10:57.097: INFO: stdout: "update-demo-nautilus-dtjgz\nupdate-demo-nautilus-tglnl\n" Jun 26 22:10:57.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9132' Jun 26 22:10:57.691: INFO: stderr: "No resources found in kubectl-9132 namespace.\n" Jun 26 22:10:57.691: INFO: stdout: "" Jun 26 22:10:57.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9132 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 22:10:57.959: INFO: stderr: "" Jun 26 22:10:57.959: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:10:57.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9132" for this suite. • [SLOW TEST:20.637 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":216,"skipped":3844,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:10:57.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 26 22:10:58.123: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed 
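The Update Demo sequence above leans on kubectl's go-template output to poll container state before and after each scale operation. A condensed sketch of those checks, with namespace kubectl-demo and the pod name as placeholders (the template expressions are copied verbatim from the run):

# List the pods carrying the update-demo label
kubectl get pods -l name=update-demo --namespace=kubectl-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# For one pod name printed above, report "true" only while the
# update-demo container is in the running state
POD=update-demo-nautilus-xxxxx   # placeholder for a name from the list
kubectl get pods "$POD" --namespace=kubectl-demo \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Scale the replication controller, then repeat the checks above
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-demo

The test retries the template checks on a five-second interval, which is why the scale-down briefly logs "Replicas for name=update-demo: expected=1 actual=2" before the extra pod terminates.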
[AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8680" for this suite. • [SLOW TEST:11.311 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:09.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:13.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-126" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3866,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:13.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-8ba5f1ba-48d8-4fb1-915c-e36d12eb1596 STEP: Creating a pod to test consume secrets Jun 26 22:11:13.436: INFO: Waiting up to 5m0s for pod "pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542" in namespace "secrets-1525" to be "success or failure" Jun 26 22:11:13.440: INFO: Pod "pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542": Phase="Pending", Reason="", readiness=false. Elapsed: 3.894145ms Jun 26 22:11:15.444: INFO: Pod "pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007853018s Jun 26 22:11:17.446: INFO: Pod "pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010717949s STEP: Saw pod success Jun 26 22:11:17.446: INFO: Pod "pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542" satisfied condition "success or failure" Jun 26 22:11:17.449: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542 container secret-volume-test: STEP: delete the pod Jun 26 22:11:17.465: INFO: Waiting for pod pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542 to disappear Jun 26 22:11:17.469: INFO: Pod pod-secrets-2ec35d43-075c-42d7-8931-b6da724b6542 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:17.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1525" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3870,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:17.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-f7t7 STEP: Creating a pod to test atomic-volume-subpath Jun 26 22:11:17.671: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-f7t7" in namespace "subpath-6170" to be "success or failure" Jun 26 22:11:17.679: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.75978ms Jun 26 22:11:19.683: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011783044s Jun 26 22:11:21.687: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 4.016073211s Jun 26 22:11:23.690: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 6.019174777s Jun 26 22:11:25.694: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 8.023200267s Jun 26 22:11:27.698: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 10.02730918s Jun 26 22:11:29.703: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 12.032150627s Jun 26 22:11:31.708: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 14.036487982s Jun 26 22:11:33.711: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.040257692s Jun 26 22:11:35.716: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 18.044635739s Jun 26 22:11:37.719: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 20.047475213s Jun 26 22:11:39.722: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Running", Reason="", readiness=true. Elapsed: 22.050942729s Jun 26 22:11:41.777: INFO: Pod "pod-subpath-test-projected-f7t7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.106360291s STEP: Saw pod success Jun 26 22:11:41.778: INFO: Pod "pod-subpath-test-projected-f7t7" satisfied condition "success or failure" Jun 26 22:11:41.781: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-f7t7 container test-container-subpath-projected-f7t7: STEP: delete the pod Jun 26 22:11:42.001: INFO: Waiting for pod pod-subpath-test-projected-f7t7 to disappear Jun 26 22:11:42.009: INFO: Pod pod-subpath-test-projected-f7t7 no longer exists STEP: Deleting pod pod-subpath-test-projected-f7t7 Jun 26 22:11:42.009: INFO: Deleting pod "pod-subpath-test-projected-f7t7" in namespace "subpath-6170" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:42.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6170" for this suite. • [SLOW TEST:24.542 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":220,"skipped":3872,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:42.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 26 22:11:42.219: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 26 22:11:47.222: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:47.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2325" for this suite. 
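The release step above works purely through label selectors: the test overwrites the matching label on one of the ReplicationController's pods, the controller stops counting it and creates a replacement, and the original pod keeps running unowned. A by-hand sketch, assuming an RC whose selector is name=pod-release and a placeholder namespace rc-demo:

# Overwrite the selector label on one managed pod; the RC releases it
kubectl label pod pod-release-xxxxx name=released --overwrite --namespace=rc-demo

# The released pod survives with its new label, and the RC spawns a
# fresh replica to restore its desired count
kubectl get pods --show-labels --namespace=rc-demo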
• [SLOW TEST:5.285 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":221,"skipped":3891,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:47.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:11:47.524: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ce71af38-6268-4e47-86b4-d85a01873a59", Controller:(*bool)(0xc003c958b2), BlockOwnerDeletion:(*bool)(0xc003c958b3)}} Jun 26 22:11:47.557: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0d528bc5-d2db-4dc7-a16d-33b4dec9140c", Controller:(*bool)(0xc003cc6072), BlockOwnerDeletion:(*bool)(0xc003cc6073)}} Jun 26 22:11:47.572: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ec0bc5e5-4d8e-4dd3-9aab-9840b33150d9", Controller:(*bool)(0xc003c95a5a), BlockOwnerDeletion:(*bool)(0xc003c95a5b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:52.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3923" for this suite. 
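The three OwnerReferences dumps above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), each with BlockOwnerDeletion set; the test then asserts the garbage collector still makes progress rather than deadlocking on the circle. The same references can be inspected on a live object with jsonpath, assuming a placeholder namespace gc-demo:

# Show which object owns pod1 and whether it blocks owner deletion
kubectl get pod pod1 --namespace=gc-demo \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name} blockOwnerDeletion={.metadata.ownerReferences[0].blockOwnerDeletion}{"\n"}'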
• [SLOW TEST:5.356 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":222,"skipped":3894,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:52.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 26 22:11:53.000: INFO: Waiting up to 5m0s for pod "downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402" in namespace "downward-api-2943" to be "success or failure" Jun 26 22:11:53.034: INFO: Pod "downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402": Phase="Pending", Reason="", readiness=false. Elapsed: 34.055695ms Jun 26 22:11:55.038: INFO: Pod "downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038049497s Jun 26 22:11:57.042: INFO: Pod "downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041830648s STEP: Saw pod success Jun 26 22:11:57.042: INFO: Pod "downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402" satisfied condition "success or failure" Jun 26 22:11:57.045: INFO: Trying to get logs from node jerma-worker2 pod downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402 container dapi-container: STEP: delete the pod Jun 26 22:11:57.066: INFO: Waiting for pod downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402 to disappear Jun 26 22:11:57.070: INFO: Pod downward-api-27c6ee19-5cf4-4c33-a57c-964d05094402 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:11:57.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2943" for this suite. 
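The env vars validated above come from downward API resourceFieldRef entries, which expose a container's own requests and limits to its environment. A minimal sketch of such a pod (namespace, names, image, and values are placeholders, not taken from this run):

kubectl apply --namespace=downward-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -e CPU_LIMIT -e MEMORY_REQUEST"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF

With no containerName set, each resourceFieldRef resolves against the container that declares it.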
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3905,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:11:57.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5436 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5436 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5436.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5436.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 136.84.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.84.136_udp@PTR;check="$$(dig +tcp +noall +answer +search 136.84.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.84.136_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5436 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5436 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5436.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5436.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5436.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5436.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 136.84.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.84.136_udp@PTR;check="$$(dig +tcp +noall +answer +search 136.84.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.84.136_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 22:12:03.303: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.306: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.309: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.313: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.317: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.320: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.324: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.327: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.350: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.354: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.357: INFO: Unable to read jessie_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.361: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.364: INFO: Unable to read jessie_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.368: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.371: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.374: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:03.392: INFO: Lookups using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5436 wheezy_tcp@dns-test-service.dns-5436 wheezy_udp@dns-test-service.dns-5436.svc wheezy_tcp@dns-test-service.dns-5436.svc wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5436 jessie_tcp@dns-test-service.dns-5436 jessie_udp@dns-test-service.dns-5436.svc jessie_tcp@dns-test-service.dns-5436.svc jessie_udp@_http._tcp.dns-test-service.dns-5436.svc jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc] Jun 26 22:12:08.397: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.401: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.404: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.408: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.412: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.414: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.418: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.421: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.440: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.443: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.446: INFO: Unable to read jessie_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.448: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.451: INFO: Unable to read jessie_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.454: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.457: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.459: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:08.477: INFO: Lookups using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5436 wheezy_tcp@dns-test-service.dns-5436 wheezy_udp@dns-test-service.dns-5436.svc wheezy_tcp@dns-test-service.dns-5436.svc wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5436 jessie_tcp@dns-test-service.dns-5436 jessie_udp@dns-test-service.dns-5436.svc jessie_tcp@dns-test-service.dns-5436.svc jessie_udp@_http._tcp.dns-test-service.dns-5436.svc jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc] Jun 26 22:12:13.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.401: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.404: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.407: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436 from pod 
dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.410: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.412: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.415: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.417: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.433: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.435: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.437: INFO: Unable to read jessie_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.439: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.441: INFO: Unable to read jessie_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.443: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.446: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.448: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:13.464: INFO: Lookups using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5436 wheezy_tcp@dns-test-service.dns-5436 wheezy_udp@dns-test-service.dns-5436.svc wheezy_tcp@dns-test-service.dns-5436.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5436 jessie_tcp@dns-test-service.dns-5436 jessie_udp@dns-test-service.dns-5436.svc jessie_tcp@dns-test-service.dns-5436.svc jessie_udp@_http._tcp.dns-test-service.dns-5436.svc jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc] Jun 26 22:12:18.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.402: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.405: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.409: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.413: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.417: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.420: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.424: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.443: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.446: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.449: INFO: Unable to read jessie_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.451: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.454: INFO: Unable to read jessie_udp@dns-test-service.dns-5436.svc from pod 
dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.456: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.459: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.462: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:18.484: INFO: Lookups using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5436 wheezy_tcp@dns-test-service.dns-5436 wheezy_udp@dns-test-service.dns-5436.svc wheezy_tcp@dns-test-service.dns-5436.svc wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5436 jessie_tcp@dns-test-service.dns-5436 jessie_udp@dns-test-service.dns-5436.svc jessie_tcp@dns-test-service.dns-5436.svc jessie_udp@_http._tcp.dns-test-service.dns-5436.svc jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc] Jun 26 22:12:23.397: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.400: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.402: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.405: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.407: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.410: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.412: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.416: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod 
dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.441: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.444: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.447: INFO: Unable to read jessie_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.453: INFO: Unable to read jessie_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.459: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.462: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:23.478: INFO: Lookups using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5436 wheezy_tcp@dns-test-service.dns-5436 wheezy_udp@dns-test-service.dns-5436.svc wheezy_tcp@dns-test-service.dns-5436.svc wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5436 jessie_tcp@dns-test-service.dns-5436 jessie_udp@dns-test-service.dns-5436.svc jessie_tcp@dns-test-service.dns-5436.svc jessie_udp@_http._tcp.dns-test-service.dns-5436.svc jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc] Jun 26 22:12:28.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.402: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.405: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the 
server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.410: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.413: INFO: Unable to read wheezy_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.416: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.420: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.423: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.441: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.444: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.446: INFO: Unable to read jessie_udp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.449: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436 from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.452: INFO: Unable to read jessie_udp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.458: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.461: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc from pod dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23: the server could not find the requested resource (get pods dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23) Jun 26 22:12:28.478: INFO: Lookups using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5436 wheezy_tcp@dns-test-service.dns-5436 wheezy_udp@dns-test-service.dns-5436.svc wheezy_tcp@dns-test-service.dns-5436.svc wheezy_udp@_http._tcp.dns-test-service.dns-5436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5436 jessie_tcp@dns-test-service.dns-5436 jessie_udp@dns-test-service.dns-5436.svc jessie_tcp@dns-test-service.dns-5436.svc jessie_udp@_http._tcp.dns-test-service.dns-5436.svc jessie_tcp@_http._tcp.dns-test-service.dns-5436.svc] Jun 26 22:12:33.483: INFO: DNS probes using dns-5436/dns-test-1622df0f-1591-439b-ad8f-f8969a3ead23 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:12:34.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5436" for this suite. • [SLOW TEST:37.366 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":224,"skipped":3913,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:12:34.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:12:34.544: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:12:38.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2236" for this suite. 
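------------------------------
The pod-logs test just concluded drives the kubelet's logs subresource through the API server over a websocket. A minimal client-go sketch of the same read follows (client-go's GetLogs streams over chunked HTTP rather than a websocket, but it exercises the same subresource; the pod name is illustrative since the log does not print it, and the context-free call signatures assume a client-go release contemporary with this v1.17 run):

    package main

    import (
        "fmt"
        "io/ioutil"
        "log"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Open the logs subresource of the test pod and copy it out.
        req := client.CoreV1().Pods("pods-2236").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
        stream, err := req.Stream()
        if err != nil {
            log.Fatal(err)
        }
        defer stream.Close()
        out, err := ioutil.ReadAll(stream)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }

------------------------------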
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3920,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:12:38.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 26 22:12:38.749: INFO: Waiting up to 5m0s for pod "pod-1708ce34-bc38-4303-b5c7-c9872f374ea1" in namespace "emptydir-4409" to be "success or failure" Jun 26 22:12:38.753: INFO: Pod "pod-1708ce34-bc38-4303-b5c7-c9872f374ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.444717ms Jun 26 22:12:40.772: INFO: Pod "pod-1708ce34-bc38-4303-b5c7-c9872f374ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02281489s Jun 26 22:12:42.862: INFO: Pod "pod-1708ce34-bc38-4303-b5c7-c9872f374ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11274128s STEP: Saw pod success Jun 26 22:12:42.862: INFO: Pod "pod-1708ce34-bc38-4303-b5c7-c9872f374ea1" satisfied condition "success or failure" Jun 26 22:12:42.866: INFO: Trying to get logs from node jerma-worker pod pod-1708ce34-bc38-4303-b5c7-c9872f374ea1 container test-container: STEP: delete the pod Jun 26 22:12:43.074: INFO: Waiting for pod pod-1708ce34-bc38-4303-b5c7-c9872f374ea1 to disappear Jun 26 22:12:43.122: INFO: Pod pod-1708ce34-bc38-4303-b5c7-c9872f374ea1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:12:43.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4409" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3920,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:12:43.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-fa465f29-a1c5-4c34-b2c8-2c7257128101 STEP: Creating secret with name secret-projected-all-test-volume-f7060549-a7c2-4bdc-a773-51d061c52702 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 26 22:12:43.367: INFO: Waiting up to 5m0s for pod "projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083" in namespace "projected-5021" to be "success or failure" Jun 26 22:12:43.370: INFO: Pod "projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538609ms Jun 26 22:12:45.374: INFO: Pod "projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007553447s Jun 26 22:12:47.395: INFO: Pod "projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02784194s STEP: Saw pod success Jun 26 22:12:47.395: INFO: Pod "projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083" satisfied condition "success or failure" Jun 26 22:12:47.397: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083 container projected-all-volume-test: STEP: delete the pod Jun 26 22:12:47.413: INFO: Waiting for pod projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083 to disappear Jun 26 22:12:47.418: INFO: Pod projected-volume-d78e18c1-379d-4865-b14a-0d2cda8d6083 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:12:47.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5021" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3927,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:12:47.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jun 26 22:12:47.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3868' Jun 26 22:12:48.214: INFO: stderr: "" Jun 26 22:12:48.214: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 22:12:48.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3868' Jun 26 22:12:48.334: INFO: stderr: "" Jun 26 22:12:48.334: INFO: stdout: "update-demo-nautilus-4m4r7 update-demo-nautilus-6892n " Jun 26 22:12:48.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4m4r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:12:48.419: INFO: stderr: "" Jun 26 22:12:48.419: INFO: stdout: "" Jun 26 22:12:48.419: INFO: update-demo-nautilus-4m4r7 is created but not running Jun 26 22:12:53.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3868' Jun 26 22:12:53.517: INFO: stderr: "" Jun 26 22:12:53.517: INFO: stdout: "update-demo-nautilus-4m4r7 update-demo-nautilus-6892n " Jun 26 22:12:53.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4m4r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:12:53.617: INFO: stderr: "" Jun 26 22:12:53.618: INFO: stdout: "true" Jun 26 22:12:53.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4m4r7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:12:53.720: INFO: stderr: "" Jun 26 22:12:53.720: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:12:53.720: INFO: validating pod update-demo-nautilus-4m4r7 Jun 26 22:12:53.724: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:12:53.724: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:12:53.724: INFO: update-demo-nautilus-4m4r7 is verified up and running Jun 26 22:12:53.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6892n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:12:53.813: INFO: stderr: "" Jun 26 22:12:53.813: INFO: stdout: "true" Jun 26 22:12:53.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6892n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:12:53.906: INFO: stderr: "" Jun 26 22:12:53.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 22:12:53.906: INFO: validating pod update-demo-nautilus-6892n Jun 26 22:12:53.910: INFO: got data: { "image": "nautilus.jpg" } Jun 26 22:12:53.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 22:12:53.910: INFO: update-demo-nautilus-6892n is verified up and running STEP: rolling-update to new replication controller Jun 26 22:12:53.912: INFO: scanned /root for discovery docs: Jun 26 22:12:53.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3868' Jun 26 22:13:17.607: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 26 22:13:17.607: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 26 22:13:17.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3868' Jun 26 22:13:17.705: INFO: stderr: "" Jun 26 22:13:17.705: INFO: stdout: "update-demo-kitten-fhcbq update-demo-kitten-knn9l update-demo-nautilus-4m4r7 " STEP: Replicas for name=update-demo: expected=2 actual=3 Jun 26 22:13:22.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3868' Jun 26 22:13:22.810: INFO: stderr: "" Jun 26 22:13:22.810: INFO: stdout: "update-demo-kitten-fhcbq update-demo-kitten-knn9l " Jun 26 22:13:22.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fhcbq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:13:22.909: INFO: stderr: "" Jun 26 22:13:22.909: INFO: stdout: "true" Jun 26 22:13:22.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fhcbq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:13:22.998: INFO: stderr: "" Jun 26 22:13:22.998: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 26 22:13:22.998: INFO: validating pod update-demo-kitten-fhcbq Jun 26 22:13:23.010: INFO: got data: { "image": "kitten.jpg" } Jun 26 22:13:23.010: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 26 22:13:23.010: INFO: update-demo-kitten-fhcbq is verified up and running Jun 26 22:13:23.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-knn9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:13:23.110: INFO: stderr: "" Jun 26 22:13:23.111: INFO: stdout: "true" Jun 26 22:13:23.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-knn9l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3868' Jun 26 22:13:23.206: INFO: stderr: "" Jun 26 22:13:23.206: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 26 22:13:23.206: INFO: validating pod update-demo-kitten-knn9l Jun 26 22:13:23.220: INFO: got data: { "image": "kitten.jpg" } Jun 26 22:13:23.220: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 26 22:13:23.220: INFO: update-demo-kitten-knn9l is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:23.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3868" for this suite. 
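------------------------------
Two things are worth noting in the rolling-update transcript above. First, the stderr line already flags `rolling-update` as deprecated in favor of `rollout`; today the same flow would be a Deployment plus `kubectl rollout status`. Second, all of the polling is plain kubectl driven through Go templates. A fragment reproducing the suite's pod-listing call via os/exec, with the binary path, kubeconfig, and template taken verbatim from the log:

    import (
        "os/exec"
        "strings"
    )

    // listUpdateDemoPods returns the names of pods labelled name=update-demo,
    // exactly the template query the suite polls with between update steps.
    func listUpdateDemoPods(ns string) ([]string, error) {
        out, err := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "get", "pods",
            "-o", "template",
            "--template={{range.items}}{{.metadata.name}} {{end}}",
            "-l", "name=update-demo",
            "--namespace="+ns).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

------------------------------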
• [SLOW TEST:35.803 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":228,"skipped":3947,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:23.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 26 22:13:28.401: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:28.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4426" for this suite. 
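------------------------------
Adoption and release in the ReplicaSet test are both driven by labels and ownerReferences: a bare pod whose labels match the ReplicaSet's selector gets an ownerReference stamped onto it (adoption), and changing the label so the selector no longer matches makes the controller drop that reference again (release). The "matched label ... change" step amounts to a patch along these lines (the replacement label value is illustrative):

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // releasePod rewrites the matching label so the ReplicaSet's selector no
    // longer covers the pod; the controller then clears its ownerReference.
    func releasePod(client kubernetes.Interface, ns, podName string) error {
        patch := []byte(`{"metadata":{"labels":{"name":"pod-adoption-release-released"}}}`)
        _, err := client.CoreV1().Pods(ns).Patch(podName, types.StrategicMergePatchType, patch)
        return err
    }

------------------------------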
• [SLOW TEST:5.305 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":229,"skipped":3950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:28.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-dec9aee1-b373-4b08-9bec-4ba7bc0ca50f [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:28.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8520" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":230,"skipped":3979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:28.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:13:30.055: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:13:32.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806409, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806409, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806410, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806409, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:13:35.224: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:13:35.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6442-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:36.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5598" for this suite. STEP: Destroying namespace "webhook-5598-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.499 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":231,"skipped":4003,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:36.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-67b50ed5-beac-48b0-9245-c5cadf9ab758 STEP: Creating a pod to test consume configMaps Jun 26 22:13:36.374: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c" in namespace "projected-986" to be "success or failure" Jun 26 22:13:36.510: INFO: Pod "pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 136.175001ms Jun 26 22:13:38.513: INFO: Pod "pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139656339s Jun 26 22:13:40.518: INFO: Pod "pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c": Phase="Running", Reason="", readiness=true. Elapsed: 4.143992741s Jun 26 22:13:42.522: INFO: Pod "pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148674245s STEP: Saw pod success Jun 26 22:13:42.522: INFO: Pod "pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c" satisfied condition "success or failure" Jun 26 22:13:42.526: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c container projected-configmap-volume-test: STEP: delete the pod Jun 26 22:13:42.570: INFO: Waiting for pod pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c to disappear Jun 26 22:13:42.581: INFO: Pod pod-projected-configmaps-ea23858c-7cdd-4ee4-a579-bb560931391c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:42.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-986" for this suite. • [SLOW TEST:6.422 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":4021,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:42.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:13:42.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a" in namespace "projected-3468" to be "success or failure" Jun 26 22:13:42.700: INFO: Pod "downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.292548ms Jun 26 22:13:44.703: INFO: Pod "downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022959773s Jun 26 22:13:46.708: INFO: Pod "downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a": Phase="Running", Reason="", readiness=true. Elapsed: 4.027316019s Jun 26 22:13:48.712: INFO: Pod "downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031628902s STEP: Saw pod success Jun 26 22:13:48.712: INFO: Pod "downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a" satisfied condition "success or failure" Jun 26 22:13:48.715: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a container client-container: STEP: delete the pod Jun 26 22:13:48.734: INFO: Waiting for pod downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a to disappear Jun 26 22:13:48.738: INFO: Pod downwardapi-volume-87f66db0-53dd-45db-aa11-90bace13360a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:48.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3468" for this suite. • [SLOW TEST:6.186 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":4023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:48.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 26 22:13:48.837: INFO: Waiting up to 5m0s for pod "pod-39d0e683-ec2a-45a9-a328-3c714c0890fd" in namespace "emptydir-7717" to be "success or failure" Jun 26 22:13:48.853: INFO: Pod "pod-39d0e683-ec2a-45a9-a328-3c714c0890fd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.98051ms Jun 26 22:13:50.857: INFO: Pod "pod-39d0e683-ec2a-45a9-a328-3c714c0890fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020344939s Jun 26 22:13:52.860: INFO: Pod "pod-39d0e683-ec2a-45a9-a328-3c714c0890fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023563522s STEP: Saw pod success Jun 26 22:13:52.860: INFO: Pod "pod-39d0e683-ec2a-45a9-a328-3c714c0890fd" satisfied condition "success or failure" Jun 26 22:13:52.862: INFO: Trying to get logs from node jerma-worker2 pod pod-39d0e683-ec2a-45a9-a328-3c714c0890fd container test-container: STEP: delete the pod Jun 26 22:13:52.891: INFO: Waiting for pod pod-39d0e683-ec2a-45a9-a328-3c714c0890fd to disappear Jun 26 22:13:52.906: INFO: Pod pod-39d0e683-ec2a-45a9-a328-3c714c0890fd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:13:52.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7717" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":4061,"failed":0} ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:13:52.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-fddbf98d-280f-49a9-ad40-74f3fe31ab47 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-fddbf98d-280f-49a9-ad40-74f3fe31ab47 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:12.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-936" for this suite. 
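------------------------------
The 79-second runtime of the ConfigMap-update test is mostly waiting, not work: after the Update call sketched below, the kubelet only rewrites the files of a configMap volume on its periodic sync (plus any configmap cache TTL), so observing the change inside the running pod can take on the order of a minute. The update itself is a one-liner against the API (the key and value are illustrative):

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bumpConfigMap performs the in-place update that the test then waits
    // to see reflected inside the pod's mounted volume.
    func bumpConfigMap(client kubernetes.Interface, ns, name string) error {
        cm, err := client.CoreV1().ConfigMaps(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["data-1"] = "value-2"
        _, err = client.CoreV1().ConfigMaps(ns).Update(cm)
        return err
    }

------------------------------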
• [SLOW TEST:79.343 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":4061,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:12.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:15:12.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2036' Jun 26 22:15:17.506: INFO: stderr: "" Jun 26 22:15:17.506: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jun 26 22:15:17.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2036' Jun 26 22:15:18.999: INFO: stderr: "" Jun 26 22:15:18.999: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 26 22:15:20.003: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 22:15:20.003: INFO: Found 0 / 1 Jun 26 22:15:21.003: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 22:15:21.003: INFO: Found 1 / 1 Jun 26 22:15:21.003: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 26 22:15:21.007: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 22:15:21.007: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 26 22:15:21.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-cc6vq --namespace=kubectl-2036' Jun 26 22:15:21.113: INFO: stderr: "" Jun 26 22:15:21.113: INFO: stdout: "Name: agnhost-master-cc6vq\nNamespace: kubectl-2036\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Fri, 26 Jun 2020 22:15:17 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.27\nIPs:\n IP: 10.244.2.27\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2ff4c78f93563ba33d13766d99eded09e369d562b5369005d0da80df6802bc47\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 26 Jun 2020 22:15:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-8vsp2 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-8vsp2:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-8vsp2\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-2036/agnhost-master-cc6vq to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Jun 26 22:15:21.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2036' Jun 26 22:15:21.232: INFO: stderr: "" Jun 26 22:15:21.232: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2036\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-cc6vq\n" Jun 26 22:15:21.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2036' Jun 26 22:15:21.341: INFO: stderr: "" Jun 26 22:15:21.341: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2036\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.107.240.243\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.27:6379\nSession Affinity: None\nEvents: \n" Jun 26 22:15:21.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Jun 26 22:15:21.482: INFO: stderr: "" Jun 26 22:15:21.482: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 26 Jun 2020 22:15:15 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 26 Jun 2020 22:14:10 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 26 Jun 2020 22:14:10 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 26 Jun 2020 22:14:10 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 26 Jun 2020 22:14:10 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 103d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 103d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 103d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 103d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 103d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 103d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 103d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 103d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 103d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 26 22:15:21.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2036' Jun 26 22:15:21.596: INFO: stderr: "" Jun 26 22:15:21.596: INFO: stdout: "Name: kubectl-2036\nLabels: e2e-framework=kubectl\n e2e-run=85baae3e-d6b4-4bc8-aba9-6d8fb2bb58ab\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:21.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2036" for this suite. • [SLOW TEST:9.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":236,"skipped":4071,"failed":0} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:21.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jun 26 22:15:21.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 26 22:15:21.976: INFO: stderr: "" Jun 26 22:15:21.976: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:21.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-528" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":237,"skipped":4071,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:21.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:33.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9956" for this suite. • [SLOW TEST:11.271 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":238,"skipped":4131,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:33.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-1df04a55-2310-4674-840e-4a1e9b4b8400 STEP: Creating a pod to test consume secrets Jun 26 22:15:33.346: INFO: Waiting up to 5m0s for pod "pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d" in namespace "secrets-892" to be "success or failure" Jun 26 22:15:33.365: INFO: Pod "pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.45812ms Jun 26 22:15:35.417: INFO: Pod "pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071641491s Jun 26 22:15:37.420: INFO: Pod "pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074486535s STEP: Saw pod success Jun 26 22:15:37.420: INFO: Pod "pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d" satisfied condition "success or failure" Jun 26 22:15:37.423: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d container secret-volume-test: STEP: delete the pod Jun 26 22:15:37.471: INFO: Waiting for pod pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d to disappear Jun 26 22:15:37.477: INFO: Pod pod-secrets-b86058f5-7a72-4896-a48d-fbac610d152d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:37.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-892" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4139,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:37.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 26 22:15:37.595: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7417 /api/v1/namespaces/watch-7417/configmaps/e2e-watch-test-resource-version bf499453-5ddd-48cc-85bf-bfc6a1fde85b 27551601 0 2020-06-26 22:15:37 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 26 22:15:37.595: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7417 /api/v1/namespaces/watch-7417/configmaps/e2e-watch-test-resource-version bf499453-5ddd-48cc-85bf-bfc6a1fde85b 27551602 0 2020-06-26 22:15:37 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:37.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7417" for this suite. 
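------------------------------
What the Watchers test demonstrates is that a watch opened with an explicit resourceVersion replays history from that point: the second modification and the deletion happened before the watch existed, yet both events arrive — the two "Got : MODIFIED/DELETED" lines above. A sketch of that replay, assuming a clientset as in the earlier fragments; rv is the resourceVersion returned by the first update:

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchFrom opens a single-object watch starting at resourceVersion rv and
    // prints every event, including ones that predate the watch itself.
    func watchFrom(client kubernetes.Interface, ns, name, rv string) error {
        w, err := client.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
            FieldSelector:   "metadata.name=" + name,
            ResourceVersion: rv,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }

------------------------------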
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":240,"skipped":4140,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:37.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:15:37.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3" in namespace "downward-api-5430" to be "success or failure" Jun 26 22:15:37.729: INFO: Pod "downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.709609ms Jun 26 22:15:39.741: INFO: Pod "downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016217133s Jun 26 22:15:41.745: INFO: Pod "downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019961304s STEP: Saw pod success Jun 26 22:15:41.745: INFO: Pod "downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3" satisfied condition "success or failure" Jun 26 22:15:41.748: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3 container client-container: STEP: delete the pod Jun 26 22:15:41.808: INFO: Waiting for pod downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3 to disappear Jun 26 22:15:41.819: INFO: Pod downwardapi-volume-fbf06856-6282-46e5-9968-c4f5724e59c3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:41.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5430" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4146,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:41.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:15:41.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba" in namespace "downward-api-8004" to be "success or failure" Jun 26 22:15:41.943: INFO: Pod "downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba": Phase="Pending", Reason="", readiness=false. Elapsed: 45.319708ms Jun 26 22:15:43.946: INFO: Pod "downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049207224s Jun 26 22:15:45.985: INFO: Pod "downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087726166s STEP: Saw pod success Jun 26 22:15:45.985: INFO: Pod "downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba" satisfied condition "success or failure" Jun 26 22:15:45.989: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba container client-container: STEP: delete the pod Jun 26 22:15:46.034: INFO: Waiting for pod downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba to disappear Jun 26 22:15:46.053: INFO: Pod downwardapi-volume-a2dfa567-f6d8-4ec4-b0dd-841bb36e15ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:15:46.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8004" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:15:46.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2130 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 26 22:15:46.179: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 26 22:16:12.360: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.212 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2130 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:16:12.360: INFO: >>> kubeConfig: /root/.kube/config I0626 22:16:12.394249 6 log.go:172] (0xc0051d0630) (0xc001e2ed20) Create stream I0626 22:16:12.394274 6 log.go:172] (0xc0051d0630) (0xc001e2ed20) Stream added, broadcasting: 1 I0626 22:16:12.396789 6 log.go:172] (0xc0051d0630) Reply frame received for 1 I0626 22:16:12.396815 6 log.go:172] (0xc0051d0630) (0xc001e2ee60) Create stream I0626 22:16:12.396824 6 log.go:172] (0xc0051d0630) (0xc001e2ee60) Stream added, broadcasting: 3 I0626 22:16:12.398088 6 log.go:172] (0xc0051d0630) Reply frame received for 3 I0626 22:16:12.398144 6 log.go:172] (0xc0051d0630) (0xc000c959a0) Create stream I0626 22:16:12.398162 6 log.go:172] (0xc0051d0630) (0xc000c959a0) Stream added, broadcasting: 5 I0626 22:16:12.399469 6 log.go:172] (0xc0051d0630) Reply frame received for 5 I0626 22:16:13.528083 6 log.go:172] (0xc0051d0630) Data frame received for 3 I0626 22:16:13.528118 6 log.go:172] (0xc001e2ee60) (3) Data frame handling I0626 22:16:13.528142 6 log.go:172] (0xc001e2ee60) (3) Data frame sent I0626 22:16:13.528162 6 log.go:172] (0xc0051d0630) Data frame received for 3 I0626 22:16:13.528199 6 log.go:172] (0xc001e2ee60) (3) Data frame handling I0626 22:16:13.528443 6 log.go:172] (0xc0051d0630) Data frame received for 5 I0626 22:16:13.528465 6 log.go:172] (0xc000c959a0) (5) Data frame handling I0626 22:16:13.530508 6 log.go:172] (0xc0051d0630) Data frame received for 1 I0626 22:16:13.530528 6 log.go:172] (0xc001e2ed20) (1) Data frame handling I0626 22:16:13.530564 6 log.go:172] (0xc001e2ed20) (1) Data frame sent I0626 22:16:13.530580 6 log.go:172] (0xc0051d0630) (0xc001e2ed20) Stream removed, broadcasting: 1 I0626 22:16:13.530646 6 log.go:172] (0xc0051d0630) (0xc001e2ed20) Stream removed, broadcasting: 1 I0626 22:16:13.530656 6 log.go:172] (0xc0051d0630) (0xc001e2ee60) Stream removed, broadcasting: 3 I0626 22:16:13.530793 6 log.go:172] (0xc0051d0630) Go away received I0626 22:16:13.530956 6 
log.go:172] (0xc0051d0630) (0xc000c959a0) Stream removed, broadcasting: 5 Jun 26 22:16:13.530: INFO: Found all expected endpoints: [netserver-0] Jun 26 22:16:13.534: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.30 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2130 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:16:13.534: INFO: >>> kubeConfig: /root/.kube/config I0626 22:16:13.564557 6 log.go:172] (0xc0017a8630) (0xc0026cc5a0) Create stream I0626 22:16:13.564597 6 log.go:172] (0xc0017a8630) (0xc0026cc5a0) Stream added, broadcasting: 1 I0626 22:16:13.566584 6 log.go:172] (0xc0017a8630) Reply frame received for 1 I0626 22:16:13.566620 6 log.go:172] (0xc0017a8630) (0xc0011b86e0) Create stream I0626 22:16:13.566635 6 log.go:172] (0xc0017a8630) (0xc0011b86e0) Stream added, broadcasting: 3 I0626 22:16:13.567531 6 log.go:172] (0xc0017a8630) Reply frame received for 3 I0626 22:16:13.567559 6 log.go:172] (0xc0017a8630) (0xc0026cc8c0) Create stream I0626 22:16:13.567569 6 log.go:172] (0xc0017a8630) (0xc0026cc8c0) Stream added, broadcasting: 5 I0626 22:16:13.568416 6 log.go:172] (0xc0017a8630) Reply frame received for 5 I0626 22:16:14.661064 6 log.go:172] (0xc0017a8630) Data frame received for 3 I0626 22:16:14.661108 6 log.go:172] (0xc0011b86e0) (3) Data frame handling I0626 22:16:14.661328 6 log.go:172] (0xc0011b86e0) (3) Data frame sent I0626 22:16:14.661344 6 log.go:172] (0xc0017a8630) Data frame received for 3 I0626 22:16:14.661362 6 log.go:172] (0xc0011b86e0) (3) Data frame handling I0626 22:16:14.662065 6 log.go:172] (0xc0017a8630) Data frame received for 5 I0626 22:16:14.662091 6 log.go:172] (0xc0026cc8c0) (5) Data frame handling I0626 22:16:14.666037 6 log.go:172] (0xc0017a8630) Data frame received for 1 I0626 22:16:14.666077 6 log.go:172] (0xc0026cc5a0) (1) Data frame handling I0626 22:16:14.666099 6 log.go:172] (0xc0026cc5a0) (1) Data frame sent I0626 22:16:14.666114 6 log.go:172] (0xc0017a8630) (0xc0026cc5a0) Stream removed, broadcasting: 1 I0626 22:16:14.666189 6 log.go:172] (0xc0017a8630) Go away received I0626 22:16:14.666374 6 log.go:172] (0xc0017a8630) (0xc0026cc5a0) Stream removed, broadcasting: 1 I0626 22:16:14.666434 6 log.go:172] (0xc0017a8630) (0xc0011b86e0) Stream removed, broadcasting: 3 I0626 22:16:14.666460 6 log.go:172] (0xc0017a8630) (0xc0026cc8c0) Stream removed, broadcasting: 5 Jun 26 22:16:14.666: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:16:14.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2130" for this suite. 
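The 'nc -w 1 -u' probes in the exec logs above are one-shot UDP request/response checks: send the literal string hostName to the netserver pod and require a non-empty reply (the pod answers with its hostname). A minimal stdlib equivalent, assuming an address like the 10.244.1.212:8081 seen in the log is reachable from wherever this runs.

package sketch

import (
    "net"
    "strings"
    "time"
)

// probeUDP mirrors `echo hostName | nc -w 1 -u <ip> <port>`: one datagram
// out, one reply back, with a one-second deadline standing in for -w 1.
func probeUDP(addr string) (string, error) {
    conn, err := net.Dial("udp", addr)
    if err != nil {
        return "", err
    }
    defer conn.Close()
    _ = conn.SetDeadline(time.Now().Add(1 * time.Second))
    if _, err := conn.Write([]byte("hostName")); err != nil {
        return "", err
    }
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(buf[:n])), nil
}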
• [SLOW TEST:28.617 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4185,"failed":0} [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:16:14.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381 Jun 26 22:16:14.772: INFO: Pod name my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381: Found 0 pods out of 1 Jun 26 22:16:19.788: INFO: Pod name my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381: Found 1 pods out of 1 Jun 26 22:16:19.788: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381" are running Jun 26 22:16:19.806: INFO: Pod "my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381-tj2xx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:16:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:16:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:16:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 22:16:14 +0000 UTC Reason: Message:}]) Jun 26 22:16:19.806: INFO: Trying to dial the pod Jun 26 22:16:24.816: INFO: Controller my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381: Got expected result from replica 1 [my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381-tj2xx]: "my-hostname-basic-a5e05406-b10d-4291-b19a-76372bcde381-tj2xx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:16:24.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3558" for this suite. 
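The ReplicationController under test is about as small as an RC gets: one replica of a pod that serves its own hostname, which is what the 'Trying to dial the pod' step verifies replica by replica. A sketch of the object; the image and port are assumptions (the conformance test uses an agnhost-style serve-hostname container).

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// basicRC runs one replica of a pod that answers requests with its own
// hostname, so each replica can be checked by dialing it directly.
func basicRC(name string) *corev1.ReplicationController {
    labels := map[string]string{"name": name}
    return &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: int32Ptr(1),
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  name,
                        Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // assumption
                        Args:  []string{"serve-hostname"},
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // assumption
                    }},
                },
            },
        },
    }
}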
• [SLOW TEST:10.144 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":244,"skipped":4185,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:16:24.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-dea279ca-ee8a-4614-9345-93e4f302df1c STEP: Creating configMap with name cm-test-opt-upd-a091347f-43c3-49e5-aaf8-1a1b3714f418 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dea279ca-ee8a-4614-9345-93e4f302df1c STEP: Updating configmap cm-test-opt-upd-a091347f-43c3-49e5-aaf8-1a1b3714f418 STEP: Creating configMap with name cm-test-opt-create-59c06dd8-dda6-4f0e-820e-888db04cc61a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:17:43.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-52" for this suite. 
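The knob this spec exercises is Optional on each projected configMap source: a source that disappears must not break the volume, and the kubelet's periodic sync later surfaces the delete, the update, and the newly created map without restarting the pod, which accounts for the long 'waiting to observe update in volume' phase above. A sketch of the volume, using the configMap names from the log with their random suffixes dropped.

package sketch

import corev1 "k8s.io/api/core/v1"

func boolPtr(b bool) *bool { return &b }

// projectedOptionalVolume projects two configMaps and tolerates either one
// being absent; the test deletes one, updates the other, creates a third,
// then waits for the kubelet to resync the volume contents in place.
var projectedOptionalVolume = corev1.Volume{
    Name: "projected-configmap-volume",
    VolumeSource: corev1.VolumeSource{
        Projected: &corev1.ProjectedVolumeSource{
            Sources: []corev1.VolumeProjection{
                {ConfigMap: &corev1.ConfigMapProjection{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                    Optional:             boolPtr(true),
                }},
                {ConfigMap: &corev1.ConfigMapProjection{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
                    Optional:             boolPtr(true),
                }},
            },
        },
    },
}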
• [SLOW TEST:78.540 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4191,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:17:43.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 26 22:17:53.485: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 26 22:17:53.504: INFO: Pod pod-with-prestop-http-hook still exists Jun 26 22:17:55.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 26 22:17:55.509: INFO: Pod pod-with-prestop-http-hook still exists Jun 26 22:17:57.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 26 22:17:57.508: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:17:57.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9562" for this suite. 
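Deletion of the pod above takes several poll cycles because it carries a preStop hook: before killing the container, the kubelet issues the hook's HTTP GET, and the final 'check prestop hook' step asserts that the handler pod recorded the request. A sketch of the lifecycle stanza; current client-go names the type LifecycleHandler (the v1.17-era API in this log called it Handler), and the path and port here are illustrative.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// prestopLifecycle makes the kubelet GET the handler pod just before
// stopping the container, so the handler can assert the hook fired.
func prestopLifecycle(handlerPodIP string) *corev1.Lifecycle {
    return &corev1.Lifecycle{
        PreStop: &corev1.LifecycleHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/echo?msg=prestop", // illustrative
                Host: handlerPodIP,
                Port: intstr.FromInt(8080), // illustrative
            },
        },
    }
}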
• [SLOW TEST:14.170 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4196,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:17:57.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 22:17:57.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3381' Jun 26 22:17:57.705: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 26 22:17:57.705: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Jun 26 22:17:59.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3381' Jun 26 22:17:59.913: INFO: stderr: "" Jun 26 22:17:59.913: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:17:59.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3381" for this suite. 
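The stderr above is worth heeding: 'kubectl run --generator=deployment/apps.v1' was already deprecated in this release and has since been removed, so the same step today would be 'kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine'. In client-go it is a plain apps/v1 Deployment create; a minimal sketch, assuming an initialized clientset and current client-go signatures.

package sketch

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func int32p(i int32) *int32 { return &i }

// createHTTPDDeployment does what the deprecated generator used to do:
// a one-replica apps/v1 Deployment whose selector matches its pod labels.
func createHTTPDDeployment(cs *kubernetes.Clientset, ns string) error {
    labels := map[string]string{"run": "e2e-test-httpd-deployment"}
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32p(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "e2e-test-httpd-deployment",
                        Image: "docker.io/library/httpd:2.4.38-alpine",
                    }},
                },
            },
        },
    }
    _, err := cs.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{})
    return err
}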
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":247,"skipped":4200,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:17:59.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:18:00.024: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:18:00.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9241" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":248,"skipped":4209,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:18:00.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:18:00.910: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 26 22:18:03.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5961 create -f -' Jun 26 22:18:07.238: INFO: stderr: "" Jun 26 22:18:07.238: INFO: stdout: "e2e-test-crd-publish-openapi-4552-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 26 22:18:07.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5961 delete e2e-test-crd-publish-openapi-4552-crds test-cr' Jun 26 22:18:07.354: INFO: stderr: "" Jun 26 22:18:07.354: INFO: stdout: "e2e-test-crd-publish-openapi-4552-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 26 22:18:07.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-5961 apply -f -' Jun 26 22:18:10.071: INFO: stderr: "" Jun 26 22:18:10.071: INFO: stdout: "e2e-test-crd-publish-openapi-4552-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 26 22:18:10.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5961 delete e2e-test-crd-publish-openapi-4552-crds test-cr' Jun 26 22:18:10.179: INFO: stderr: "" Jun 26 22:18:10.179: INFO: stdout: "e2e-test-crd-publish-openapi-4552-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 26 22:18:10.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4552-crds' Jun 26 22:18:11.402: INFO: stderr: "" Jun 26 22:18:11.402: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4552-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:18:14.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5961" for this suite. • [SLOW TEST:13.483 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":249,"skipped":4211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:18:14.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:18:14.851: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:18:16.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 22:18:18.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728806694, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:18:21.951: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 26 22:18:21.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:18:23.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-434" for this suite. STEP: Destroying namespace "webhook-434-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.996 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":250,"skipped":4235,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:18:23.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:18:27.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8238" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4243,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:18:27.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-9057a158-e4ee-4323-900f-eed39990f217 STEP: Creating a pod to test consume secrets Jun 26 22:18:27.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877" in namespace "projected-1396" to be "success or failure" Jun 26 22:18:27.461: INFO: Pod "pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.358481ms Jun 26 22:18:29.466: INFO: Pod "pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012675933s Jun 26 22:18:31.470: INFO: Pod "pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016895291s STEP: Saw pod success Jun 26 22:18:31.470: INFO: Pod "pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877" satisfied condition "success or failure" Jun 26 22:18:31.473: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877 container projected-secret-volume-test: STEP: delete the pod Jun 26 22:18:31.490: INFO: Waiting for pod pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877 to disappear Jun 26 22:18:31.519: INFO: Pod pod-projected-secrets-807e7a94-75af-45c9-8040-e053f0a6a877 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:18:31.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1396" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4252,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:18:31.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:18:31.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35" in namespace "projected-431" to be "success or failure" Jun 26 22:18:31.637: INFO: Pod "downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35": Phase="Pending", Reason="", readiness=false. Elapsed: 14.644463ms Jun 26 22:18:33.641: INFO: Pod "downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018231534s Jun 26 22:18:35.645: INFO: Pod "downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022523652s STEP: Saw pod success Jun 26 22:18:35.645: INFO: Pod "downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35" satisfied condition "success or failure" Jun 26 22:18:35.648: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35 container client-container: STEP: delete the pod Jun 26 22:18:35.670: INFO: Waiting for pod downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35 to disappear Jun 26 22:18:35.680: INFO: Pod downwardapi-volume-78d4a5dc-be70-43c1-8bc8-e9864514fb35 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:18:35.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-431" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4268,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:18:35.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-490 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-490 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-490 Jun 26 22:18:35.869: INFO: Found 0 stateful pods, waiting for 1 Jun 26 22:18:45.873: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 26 22:18:45.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:18:46.188: INFO: stderr: "I0626 22:18:46.022894 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3b80) Create stream\nI0626 22:18:46.022964 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3b80) Stream added, broadcasting: 1\nI0626 22:18:46.024816 3550 log.go:172] (0xc0000f4fd0) Reply frame received for 1\nI0626 22:18:46.024858 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3c20) Create stream\nI0626 22:18:46.024868 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3c20) Stream added, broadcasting: 3\nI0626 22:18:46.025797 3550 log.go:172] (0xc0000f4fd0) Reply frame received for 3\nI0626 22:18:46.025825 3550 log.go:172] (0xc0000f4fd0) (0xc000970000) Create stream\nI0626 22:18:46.025833 3550 
log.go:172] (0xc0000f4fd0) (0xc000970000) Stream added, broadcasting: 5\nI0626 22:18:46.026797 3550 log.go:172] (0xc0000f4fd0) Reply frame received for 5\nI0626 22:18:46.127499 3550 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0626 22:18:46.127524 3550 log.go:172] (0xc000970000) (5) Data frame handling\nI0626 22:18:46.127539 3550 log.go:172] (0xc000970000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:18:46.180506 3550 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0626 22:18:46.180529 3550 log.go:172] (0xc0005c3c20) (3) Data frame handling\nI0626 22:18:46.180547 3550 log.go:172] (0xc0005c3c20) (3) Data frame sent\nI0626 22:18:46.180556 3550 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0626 22:18:46.180561 3550 log.go:172] (0xc0005c3c20) (3) Data frame handling\nI0626 22:18:46.180736 3550 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0626 22:18:46.180759 3550 log.go:172] (0xc000970000) (5) Data frame handling\nI0626 22:18:46.182618 3550 log.go:172] (0xc0000f4fd0) Data frame received for 1\nI0626 22:18:46.182647 3550 log.go:172] (0xc0005c3b80) (1) Data frame handling\nI0626 22:18:46.182673 3550 log.go:172] (0xc0005c3b80) (1) Data frame sent\nI0626 22:18:46.182694 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3b80) Stream removed, broadcasting: 1\nI0626 22:18:46.182721 3550 log.go:172] (0xc0000f4fd0) Go away received\nI0626 22:18:46.183082 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3b80) Stream removed, broadcasting: 1\nI0626 22:18:46.183109 3550 log.go:172] (0xc0000f4fd0) (0xc0005c3c20) Stream removed, broadcasting: 3\nI0626 22:18:46.183128 3550 log.go:172] (0xc0000f4fd0) (0xc000970000) Stream removed, broadcasting: 5\n" Jun 26 22:18:46.189: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:18:46.189: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:18:46.192: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 26 22:18:56.196: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:18:56.196: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 22:18:56.217: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:18:56.217: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC }] Jun 26 22:18:56.217: INFO: Jun 26 22:18:56.217: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 26 22:18:57.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987216269s Jun 26 22:18:58.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983284816s Jun 26 22:18:59.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.898630916s Jun 26 22:19:00.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.850686896s Jun 26 22:19:01.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.845724895s Jun 26 22:19:02.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.840667276s Jun 26 22:19:03.373: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 2.835717617s Jun 26 22:19:04.378: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.830913098s Jun 26 22:19:05.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 825.686461ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-490 Jun 26 22:19:06.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:06.632: INFO: stderr: "I0626 22:19:06.522337 3574 log.go:172] (0xc0001076b0) (0xc0006b19a0) Create stream\nI0626 22:19:06.522402 3574 log.go:172] (0xc0001076b0) (0xc0006b19a0) Stream added, broadcasting: 1\nI0626 22:19:06.524945 3574 log.go:172] (0xc0001076b0) Reply frame received for 1\nI0626 22:19:06.525011 3574 log.go:172] (0xc0001076b0) (0xc0004d4000) Create stream\nI0626 22:19:06.525038 3574 log.go:172] (0xc0001076b0) (0xc0004d4000) Stream added, broadcasting: 3\nI0626 22:19:06.526201 3574 log.go:172] (0xc0001076b0) Reply frame received for 3\nI0626 22:19:06.526232 3574 log.go:172] (0xc0001076b0) (0xc0006b1b80) Create stream\nI0626 22:19:06.526239 3574 log.go:172] (0xc0001076b0) (0xc0006b1b80) Stream added, broadcasting: 5\nI0626 22:19:06.527390 3574 log.go:172] (0xc0001076b0) Reply frame received for 5\nI0626 22:19:06.625889 3574 log.go:172] (0xc0001076b0) Data frame received for 5\nI0626 22:19:06.625948 3574 log.go:172] (0xc0006b1b80) (5) Data frame handling\nI0626 22:19:06.625964 3574 log.go:172] (0xc0006b1b80) (5) Data frame sent\nI0626 22:19:06.625976 3574 log.go:172] (0xc0001076b0) Data frame received for 5\nI0626 22:19:06.625992 3574 log.go:172] (0xc0006b1b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 22:19:06.626007 3574 log.go:172] (0xc0001076b0) Data frame received for 3\nI0626 22:19:06.626041 3574 log.go:172] (0xc0004d4000) (3) Data frame handling\nI0626 22:19:06.626052 3574 log.go:172] (0xc0004d4000) (3) Data frame sent\nI0626 22:19:06.626059 3574 log.go:172] (0xc0001076b0) Data frame received for 3\nI0626 22:19:06.626065 3574 log.go:172] (0xc0004d4000) (3) Data frame handling\nI0626 22:19:06.627529 3574 log.go:172] (0xc0001076b0) Data frame received for 1\nI0626 22:19:06.627557 3574 log.go:172] (0xc0006b19a0) (1) Data frame handling\nI0626 22:19:06.627570 3574 log.go:172] (0xc0006b19a0) (1) Data frame sent\nI0626 22:19:06.627593 3574 log.go:172] (0xc0001076b0) (0xc0006b19a0) Stream removed, broadcasting: 1\nI0626 22:19:06.627645 3574 log.go:172] (0xc0001076b0) Go away received\nI0626 22:19:06.627943 3574 log.go:172] (0xc0001076b0) (0xc0006b19a0) Stream removed, broadcasting: 1\nI0626 22:19:06.627962 3574 log.go:172] (0xc0001076b0) (0xc0004d4000) Stream removed, broadcasting: 3\nI0626 22:19:06.627973 3574 log.go:172] (0xc0001076b0) (0xc0006b1b80) Stream removed, broadcasting: 5\n" Jun 26 22:19:06.633: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:19:06.633: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:19:06.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:06.870: INFO: stderr: "I0626 22:19:06.770805 3596 log.go:172] (0xc00010b290) 
(0xc0005d48c0) Create stream\nI0626 22:19:06.770857 3596 log.go:172] (0xc00010b290) (0xc0005d48c0) Stream added, broadcasting: 1\nI0626 22:19:06.774116 3596 log.go:172] (0xc00010b290) Reply frame received for 1\nI0626 22:19:06.774185 3596 log.go:172] (0xc00010b290) (0xc0002a9680) Create stream\nI0626 22:19:06.774218 3596 log.go:172] (0xc00010b290) (0xc0002a9680) Stream added, broadcasting: 3\nI0626 22:19:06.775377 3596 log.go:172] (0xc00010b290) Reply frame received for 3\nI0626 22:19:06.775416 3596 log.go:172] (0xc00010b290) (0xc0008f2000) Create stream\nI0626 22:19:06.775426 3596 log.go:172] (0xc00010b290) (0xc0008f2000) Stream added, broadcasting: 5\nI0626 22:19:06.776687 3596 log.go:172] (0xc00010b290) Reply frame received for 5\nI0626 22:19:06.847900 3596 log.go:172] (0xc00010b290) Data frame received for 5\nI0626 22:19:06.847936 3596 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0626 22:19:06.847960 3596 log.go:172] (0xc0008f2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 22:19:06.859325 3596 log.go:172] (0xc00010b290) Data frame received for 5\nI0626 22:19:06.859366 3596 log.go:172] (0xc00010b290) Data frame received for 3\nI0626 22:19:06.859404 3596 log.go:172] (0xc0002a9680) (3) Data frame handling\nI0626 22:19:06.859434 3596 log.go:172] (0xc0002a9680) (3) Data frame sent\nI0626 22:19:06.859630 3596 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0626 22:19:06.859715 3596 log.go:172] (0xc0008f2000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0626 22:19:06.859952 3596 log.go:172] (0xc00010b290) Data frame received for 3\nI0626 22:19:06.859977 3596 log.go:172] (0xc00010b290) Data frame received for 5\nI0626 22:19:06.860009 3596 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0626 22:19:06.860035 3596 log.go:172] (0xc0002a9680) (3) Data frame handling\nI0626 22:19:06.862208 3596 log.go:172] (0xc00010b290) Data frame received for 1\nI0626 22:19:06.862242 3596 log.go:172] (0xc0005d48c0) (1) Data frame handling\nI0626 22:19:06.862262 3596 log.go:172] (0xc0005d48c0) (1) Data frame sent\nI0626 22:19:06.862314 3596 log.go:172] (0xc00010b290) (0xc0005d48c0) Stream removed, broadcasting: 1\nI0626 22:19:06.862352 3596 log.go:172] (0xc00010b290) Go away received\nI0626 22:19:06.862774 3596 log.go:172] (0xc00010b290) (0xc0005d48c0) Stream removed, broadcasting: 1\nI0626 22:19:06.862795 3596 log.go:172] (0xc00010b290) (0xc0002a9680) Stream removed, broadcasting: 3\nI0626 22:19:06.862807 3596 log.go:172] (0xc00010b290) (0xc0008f2000) Stream removed, broadcasting: 5\n" Jun 26 22:19:06.870: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:19:06.870: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:19:06.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:07.087: INFO: stderr: "I0626 22:19:06.996025 3619 log.go:172] (0xc0000f4b00) (0xc0008e0000) Create stream\nI0626 22:19:06.996104 3619 log.go:172] (0xc0000f4b00) (0xc0008e0000) Stream added, broadcasting: 1\nI0626 22:19:06.999322 3619 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0626 22:19:06.999395 3619 log.go:172] (0xc0000f4b00) (0xc0006d3ae0) Create stream\nI0626 22:19:06.999421 3619 log.go:172] (0xc0000f4b00) (0xc0006d3ae0) Stream added, 
broadcasting: 3\nI0626 22:19:07.000530 3619 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0626 22:19:07.000562 3619 log.go:172] (0xc0000f4b00) (0xc00021a000) Create stream\nI0626 22:19:07.000576 3619 log.go:172] (0xc0000f4b00) (0xc00021a000) Stream added, broadcasting: 5\nI0626 22:19:07.001692 3619 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0626 22:19:07.078886 3619 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0626 22:19:07.078912 3619 log.go:172] (0xc00021a000) (5) Data frame handling\nI0626 22:19:07.078924 3619 log.go:172] (0xc00021a000) (5) Data frame sent\nI0626 22:19:07.078932 3619 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0626 22:19:07.078940 3619 log.go:172] (0xc00021a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0626 22:19:07.078963 3619 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0626 22:19:07.078971 3619 log.go:172] (0xc0006d3ae0) (3) Data frame handling\nI0626 22:19:07.078979 3619 log.go:172] (0xc0006d3ae0) (3) Data frame sent\nI0626 22:19:07.079091 3619 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0626 22:19:07.079133 3619 log.go:172] (0xc0006d3ae0) (3) Data frame handling\nI0626 22:19:07.080261 3619 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0626 22:19:07.080279 3619 log.go:172] (0xc0008e0000) (1) Data frame handling\nI0626 22:19:07.080291 3619 log.go:172] (0xc0008e0000) (1) Data frame sent\nI0626 22:19:07.080304 3619 log.go:172] (0xc0000f4b00) (0xc0008e0000) Stream removed, broadcasting: 1\nI0626 22:19:07.080315 3619 log.go:172] (0xc0000f4b00) Go away received\nI0626 22:19:07.080650 3619 log.go:172] (0xc0000f4b00) (0xc0008e0000) Stream removed, broadcasting: 1\nI0626 22:19:07.080669 3619 log.go:172] (0xc0000f4b00) (0xc0006d3ae0) Stream removed, broadcasting: 3\nI0626 22:19:07.080677 3619 log.go:172] (0xc0000f4b00) (0xc00021a000) Stream removed, broadcasting: 5\n" Jun 26 22:19:07.087: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:19:07.087: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:19:07.091: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 22:19:07.091: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 22:19:07.091: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 26 22:19:07.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:19:07.284: INFO: stderr: "I0626 22:19:07.224983 3639 log.go:172] (0xc0009106e0) (0xc00091e000) Create stream\nI0626 22:19:07.225042 3639 log.go:172] (0xc0009106e0) (0xc00091e000) Stream added, broadcasting: 1\nI0626 22:19:07.227024 3639 log.go:172] (0xc0009106e0) Reply frame received for 1\nI0626 22:19:07.227053 3639 log.go:172] (0xc0009106e0) (0xc000647cc0) Create stream\nI0626 22:19:07.227061 3639 log.go:172] (0xc0009106e0) (0xc000647cc0) Stream added, broadcasting: 3\nI0626 22:19:07.227903 3639 log.go:172] (0xc0009106e0) Reply frame received for 3\nI0626 22:19:07.227948 3639 log.go:172] (0xc0009106e0) (0xc00091e0a0) Create stream\nI0626 22:19:07.227965 3639 log.go:172] 
(0xc0009106e0) (0xc00091e0a0) Stream added, broadcasting: 5\nI0626 22:19:07.228580 3639 log.go:172] (0xc0009106e0) Reply frame received for 5\nI0626 22:19:07.276070 3639 log.go:172] (0xc0009106e0) Data frame received for 5\nI0626 22:19:07.276092 3639 log.go:172] (0xc00091e0a0) (5) Data frame handling\nI0626 22:19:07.276109 3639 log.go:172] (0xc00091e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:19:07.276172 3639 log.go:172] (0xc0009106e0) Data frame received for 5\nI0626 22:19:07.276189 3639 log.go:172] (0xc00091e0a0) (5) Data frame handling\nI0626 22:19:07.276206 3639 log.go:172] (0xc0009106e0) Data frame received for 3\nI0626 22:19:07.276223 3639 log.go:172] (0xc000647cc0) (3) Data frame handling\nI0626 22:19:07.276232 3639 log.go:172] (0xc000647cc0) (3) Data frame sent\nI0626 22:19:07.276238 3639 log.go:172] (0xc0009106e0) Data frame received for 3\nI0626 22:19:07.276244 3639 log.go:172] (0xc000647cc0) (3) Data frame handling\nI0626 22:19:07.277922 3639 log.go:172] (0xc0009106e0) Data frame received for 1\nI0626 22:19:07.277938 3639 log.go:172] (0xc00091e000) (1) Data frame handling\nI0626 22:19:07.277951 3639 log.go:172] (0xc00091e000) (1) Data frame sent\nI0626 22:19:07.277968 3639 log.go:172] (0xc0009106e0) (0xc00091e000) Stream removed, broadcasting: 1\nI0626 22:19:07.277997 3639 log.go:172] (0xc0009106e0) Go away received\nI0626 22:19:07.278222 3639 log.go:172] (0xc0009106e0) (0xc00091e000) Stream removed, broadcasting: 1\nI0626 22:19:07.278233 3639 log.go:172] (0xc0009106e0) (0xc000647cc0) Stream removed, broadcasting: 3\nI0626 22:19:07.278240 3639 log.go:172] (0xc0009106e0) (0xc00091e0a0) Stream removed, broadcasting: 5\n" Jun 26 22:19:07.284: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:19:07.284: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:19:07.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:19:07.618: INFO: stderr: "I0626 22:19:07.404913 3660 log.go:172] (0xc000b0e160) (0xc000aac0a0) Create stream\nI0626 22:19:07.404955 3660 log.go:172] (0xc000b0e160) (0xc000aac0a0) Stream added, broadcasting: 1\nI0626 22:19:07.406613 3660 log.go:172] (0xc000b0e160) Reply frame received for 1\nI0626 22:19:07.406653 3660 log.go:172] (0xc000b0e160) (0xc000af4140) Create stream\nI0626 22:19:07.406669 3660 log.go:172] (0xc000b0e160) (0xc000af4140) Stream added, broadcasting: 3\nI0626 22:19:07.407432 3660 log.go:172] (0xc000b0e160) Reply frame received for 3\nI0626 22:19:07.407468 3660 log.go:172] (0xc000b0e160) (0xc000af41e0) Create stream\nI0626 22:19:07.407478 3660 log.go:172] (0xc000b0e160) (0xc000af41e0) Stream added, broadcasting: 5\nI0626 22:19:07.408313 3660 log.go:172] (0xc000b0e160) Reply frame received for 5\nI0626 22:19:07.557661 3660 log.go:172] (0xc000b0e160) Data frame received for 5\nI0626 22:19:07.557681 3660 log.go:172] (0xc000af41e0) (5) Data frame handling\nI0626 22:19:07.557693 3660 log.go:172] (0xc000af41e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:19:07.606887 3660 log.go:172] (0xc000b0e160) Data frame received for 3\nI0626 22:19:07.606933 3660 log.go:172] (0xc000af4140) (3) Data frame handling\nI0626 22:19:07.606972 3660 log.go:172] (0xc000af4140) (3) Data frame sent\nI0626 22:19:07.607190 3660 
log.go:172] (0xc000b0e160) Data frame received for 3\nI0626 22:19:07.607219 3660 log.go:172] (0xc000af4140) (3) Data frame handling\nI0626 22:19:07.607485 3660 log.go:172] (0xc000b0e160) Data frame received for 5\nI0626 22:19:07.607508 3660 log.go:172] (0xc000af41e0) (5) Data frame handling\nI0626 22:19:07.609073 3660 log.go:172] (0xc000b0e160) Data frame received for 1\nI0626 22:19:07.609089 3660 log.go:172] (0xc000aac0a0) (1) Data frame handling\nI0626 22:19:07.609097 3660 log.go:172] (0xc000aac0a0) (1) Data frame sent\nI0626 22:19:07.609107 3660 log.go:172] (0xc000b0e160) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0626 22:19:07.609338 3660 log.go:172] (0xc000b0e160) Go away received\nI0626 22:19:07.612250 3660 log.go:172] (0xc000b0e160) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0626 22:19:07.612281 3660 log.go:172] (0xc000b0e160) (0xc000af4140) Stream removed, broadcasting: 3\nI0626 22:19:07.612293 3660 log.go:172] (0xc000b0e160) (0xc000af41e0) Stream removed, broadcasting: 5\n" Jun 26 22:19:07.619: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:19:07.619: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:19:07.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:19:07.856: INFO: stderr: "I0626 22:19:07.754148 3680 log.go:172] (0xc000628fd0) (0xc0006e1c20) Create stream\nI0626 22:19:07.754197 3680 log.go:172] (0xc000628fd0) (0xc0006e1c20) Stream added, broadcasting: 1\nI0626 22:19:07.756175 3680 log.go:172] (0xc000628fd0) Reply frame received for 1\nI0626 22:19:07.756211 3680 log.go:172] (0xc000628fd0) (0xc0007aabe0) Create stream\nI0626 22:19:07.756223 3680 log.go:172] (0xc000628fd0) (0xc0007aabe0) Stream added, broadcasting: 3\nI0626 22:19:07.757022 3680 log.go:172] (0xc000628fd0) Reply frame received for 3\nI0626 22:19:07.757060 3680 log.go:172] (0xc000628fd0) (0xc0006e1cc0) Create stream\nI0626 22:19:07.757069 3680 log.go:172] (0xc000628fd0) (0xc0006e1cc0) Stream added, broadcasting: 5\nI0626 22:19:07.757952 3680 log.go:172] (0xc000628fd0) Reply frame received for 5\nI0626 22:19:07.816639 3680 log.go:172] (0xc000628fd0) Data frame received for 5\nI0626 22:19:07.816666 3680 log.go:172] (0xc0006e1cc0) (5) Data frame handling\nI0626 22:19:07.816688 3680 log.go:172] (0xc0006e1cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:19:07.846300 3680 log.go:172] (0xc000628fd0) Data frame received for 3\nI0626 22:19:07.846334 3680 log.go:172] (0xc0007aabe0) (3) Data frame handling\nI0626 22:19:07.846353 3680 log.go:172] (0xc0007aabe0) (3) Data frame sent\nI0626 22:19:07.846371 3680 log.go:172] (0xc000628fd0) Data frame received for 3\nI0626 22:19:07.846387 3680 log.go:172] (0xc0007aabe0) (3) Data frame handling\nI0626 22:19:07.846404 3680 log.go:172] (0xc000628fd0) Data frame received for 5\nI0626 22:19:07.846420 3680 log.go:172] (0xc0006e1cc0) (5) Data frame handling\nI0626 22:19:07.848172 3680 log.go:172] (0xc000628fd0) Data frame received for 1\nI0626 22:19:07.848210 3680 log.go:172] (0xc0006e1c20) (1) Data frame handling\nI0626 22:19:07.848235 3680 log.go:172] (0xc0006e1c20) (1) Data frame sent\nI0626 22:19:07.848260 3680 log.go:172] (0xc000628fd0) (0xc0006e1c20) Stream removed, broadcasting: 1\nI0626 22:19:07.848287 3680 log.go:172] (0xc000628fd0) Go away 
received\nI0626 22:19:07.848686 3680 log.go:172] (0xc000628fd0) (0xc0006e1c20) Stream removed, broadcasting: 1\nI0626 22:19:07.848712 3680 log.go:172] (0xc000628fd0) (0xc0007aabe0) Stream removed, broadcasting: 3\nI0626 22:19:07.848725 3680 log.go:172] (0xc000628fd0) (0xc0006e1cc0) Stream removed, broadcasting: 5\n" Jun 26 22:19:07.856: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:19:07.856: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:19:07.856: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 22:19:07.859: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 26 22:19:17.865: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:19:17.865: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:19:17.865: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:19:17.876: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:17.876: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC }] Jun 26 22:19:17.876: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:17.876: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:17.876: INFO: Jun 26 22:19:17.876: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 22:19:18.882: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:18.882: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC }] Jun 26 22:19:18.882: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:18.882: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:18.882: INFO: Jun 26 22:19:18.882: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 22:19:19.887: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:19.887: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:35 +0000 UTC }] Jun 26 22:19:19.887: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:19.887: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:19.888: INFO: Jun 26 22:19:19.888: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 22:19:20.893: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:20.893: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:20.893: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:20.893: INFO: Jun 26 22:19:20.893: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 26 22:19:21.899: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:21.899: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:21.899: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:21.899: INFO: Jun 26 22:19:21.899: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 26 22:19:22.904: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:22.904: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:22.904: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:22.904: INFO: Jun 26 22:19:22.904: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 26 22:19:23.909: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:23.909: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:23.909: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 
UTC }] Jun 26 22:19:23.909: INFO: Jun 26 22:19:23.909: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 26 22:19:24.913: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:24.913: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:24.913: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:24.913: INFO: Jun 26 22:19:24.913: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 26 22:19:25.919: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:25.919: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:25.919: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:25.919: INFO: Jun 26 22:19:25.919: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 26 22:19:26.925: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 22:19:26.925: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:26.925: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:19:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 22:18:56 +0000 UTC }] Jun 26 22:19:26.925: INFO: Jun 26 22:19:26.925: 
INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-490 Jun 26 22:19:27.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:28.077: INFO: rc: 1 Jun 26 22:19:28.077: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jun 26 22:19:38.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:38.182: INFO: rc: 1 Jun 26 22:19:38.182: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:19:48.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:48.303: INFO: rc: 1 Jun 26 22:19:48.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:19:58.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:19:58.397: INFO: rc: 1 Jun 26 22:19:58.397: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:20:08.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:20:08.497: INFO: rc: 1 Jun 26 22:20:08.497: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:20:18.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:20:18.610: INFO: rc: 1 Jun 26 22:20:18.610: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:20:28.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:20:28.717: INFO: rc: 1 Jun 26 22:20:28.717: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:20:38.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:20:38.818: INFO: rc: 1 Jun 26 22:20:38.818: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:20:48.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:20:48.929: INFO: rc: 1 Jun 26 22:20:48.929: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:20:58.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:20:59.039: INFO: rc: 1 Jun 26 22:20:59.040: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:21:09.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:21:09.140: INFO: rc: 1 Jun 26 22:21:09.140: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:21:19.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:21:19.240: INFO: rc: 1 Jun 26 22:21:19.240: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: 
stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:21:29.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:21:29.342: INFO: rc: 1 Jun 26 22:21:29.342: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:21:39.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:21:39.446: INFO: rc: 1 Jun 26 22:21:39.446: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:21:49.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:21:49.550: INFO: rc: 1 Jun 26 22:21:49.550: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:21:59.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:21:59.654: INFO: rc: 1 Jun 26 22:21:59.654: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:22:09.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:22:09.755: INFO: rc: 1 Jun 26 22:22:09.755: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:22:19.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:22:19.842: INFO: rc: 1 Jun 26 22:22:19.842: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not 
found error: exit status 1 Jun 26 22:22:29.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:22:29.932: INFO: rc: 1 Jun 26 22:22:29.932: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:22:39.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:22:40.028: INFO: rc: 1 Jun 26 22:22:40.028: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:22:50.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:22:50.137: INFO: rc: 1 Jun 26 22:22:50.137: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:23:00.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:23:00.272: INFO: rc: 1 Jun 26 22:23:00.272: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:23:10.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:23:10.380: INFO: rc: 1 Jun 26 22:23:10.380: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:23:20.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:23:20.499: INFO: rc: 1 Jun 26 22:23:20.499: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:23:30.499: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:23:30.607: INFO: rc: 1 Jun 26 22:23:30.607: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:23:40.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:23:40.709: INFO: rc: 1 Jun 26 22:23:40.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:23:50.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:23:50.809: INFO: rc: 1 Jun 26 22:23:50.809: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:24:00.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:24:00.910: INFO: rc: 1 Jun 26 22:24:00.911: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:24:10.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:24:11.019: INFO: rc: 1 Jun 26 22:24:11.019: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:24:21.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:24:21.127: INFO: rc: 1 Jun 26 22:24:21.127: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 26 22:24:31.127: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-490 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:24:31.246: INFO: rc: 1 Jun 26 22:24:31.246: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Jun 26 22:24:31.246: INFO: Scaling statefulset ss to 0 Jun 26 22:24:31.254: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 26 22:24:31.256: INFO: Deleting all statefulset in ns statefulset-490 Jun 26 22:24:31.258: INFO: Scaling statefulset ss to 0 Jun 26 22:24:31.266: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 22:24:31.269: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:24:31.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-490" for this suite. • [SLOW TEST:355.598 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":254,"skipped":4279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:24:31.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 22:24:35.482: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:24:35.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8714" for this suite. 
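The termination-message assertion just above ("Expected: &{OK} to match Container's Termination Message: OK") hinges on two container-spec fields: terminationMessagePath, the file the container writes its message to, and terminationMessagePolicy, which with FallbackToLogsOnError lets the kubelet fall back to the tail of the container log when that file is empty and the container exited with an error. A minimal sketch of such a pod using the k8s.io/api types — the pod name, container name, image, and command here are illustrative, not taken from the test:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Container that writes its termination message to the default path.
	// With FallbackToLogsOnError the kubelet uses the log tail instead,
	// but only when the file is empty and the container failed.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "writer",
				Image:                    "docker.io/library/busybox:1.29",
				Command:                  []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}

Once the container exits, the message surfaces in status.containerStatuses[].state.terminated.message, which is the field the test compares against "OK".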
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4309,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:24:35.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:24:35.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0" in namespace "projected-7954" to be "success or failure" Jun 26 22:24:35.611: INFO: Pod "downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.615803ms Jun 26 22:24:37.614: INFO: Pod "downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008705781s Jun 26 22:24:39.619: INFO: Pod "downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013076622s STEP: Saw pod success Jun 26 22:24:39.619: INFO: Pod "downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0" satisfied condition "success or failure" Jun 26 22:24:39.622: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0 container client-container: STEP: delete the pod Jun 26 22:24:39.669: INFO: Waiting for pod downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0 to disappear Jun 26 22:24:39.684: INFO: Pod downwardapi-volume-17ba388d-55ce-4a7a-9ad3-33e8694f89c0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:24:39.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7954" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:24:39.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 26 22:24:39.749: INFO: PodSpec: initContainers in spec.initContainers Jun 26 22:25:28.969: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2b93bfa8-e04b-421d-9974-874b6c924c99", GenerateName:"", Namespace:"init-container-6717", SelfLink:"/api/v1/namespaces/init-container-6717/pods/pod-init-2b93bfa8-e04b-421d-9974-874b6c924c99", UID:"72539dad-64d4-4a0e-af3a-0b04aae7a790", ResourceVersion:"27554042", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728807079, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"749527438"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nk2cb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002ec9e80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nk2cb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nk2cb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nk2cb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003d70bb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e86de0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003d70c40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003d70c60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003d70c68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003d70c6c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807079, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807079, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807079, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807079, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.40", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.40"}}, StartTime:(*v1.Time)(0xc0024483e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002448420), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ac2070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d10890f900dde2eb94dc7424e38a9f032e2dd2aa6e2ffaafde532096e700ed4a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002448440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002448400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003d70cef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:25:28.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6717" for this suite. • [SLOW TEST:49.290 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":257,"skipped":4348,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:25:28.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 22:25:29.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1574' Jun 26 22:25:29.286: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 26 22:25:29.286: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jun 26 22:25:29.295: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 26 22:25:29.356: INFO: scanned /root for discovery docs: Jun 26 22:25:29.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1574' Jun 26 22:25:45.285: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 26 22:25:45.285: INFO: stdout: "Created e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310\nScaling up e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Jun 26 22:25:45.285: INFO: stdout: "Created e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310\nScaling up e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jun 26 22:25:45.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1574' Jun 26 22:25:45.398: INFO: stderr: "" Jun 26 22:25:45.398: INFO: stdout: "e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310-clt9q e2e-test-httpd-rc-k69cz " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Jun 26 22:25:50.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1574' Jun 26 22:25:50.501: INFO: stderr: "" Jun 26 22:25:50.501: INFO: stdout: "e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310-clt9q " Jun 26 22:25:50.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310-clt9q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1574' Jun 26 22:25:50.600: INFO: stderr: "" Jun 26 22:25:50.601: INFO: stdout: "true" Jun 26 22:25:50.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310-clt9q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1574' Jun 26 22:25:50.694: INFO: stderr: "" Jun 26 22:25:50.694: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jun 26 22:25:50.695: INFO: e2e-test-httpd-rc-58b2ce358d6d066b8b8272ca5e6fb310-clt9q is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Jun 26 22:25:50.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1574' Jun 26 22:25:50.805: INFO: stderr: "" Jun 26 22:25:50.805: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:25:50.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1574" for this suite. • [SLOW TEST:21.832 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":258,"skipped":4361,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:25:50.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:26:07.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2510" for this suite. 
• [SLOW TEST:16.410 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":259,"skipped":4368,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:26:07.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-68fd5cfb-a012-4ea8-82eb-f41192a7b446 STEP: Creating a pod to test consume configMaps Jun 26 22:26:07.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2" in namespace "configmap-6707" to be "success or failure" Jun 26 22:26:07.408: INFO: Pod "pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2": Phase="Pending", Reason="", readiness=false. Elapsed: 73.582509ms Jun 26 22:26:09.411: INFO: Pod "pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076997313s Jun 26 22:26:11.415: INFO: Pod "pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081154631s STEP: Saw pod success Jun 26 22:26:11.415: INFO: Pod "pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2" satisfied condition "success or failure" Jun 26 22:26:11.418: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2 container configmap-volume-test: STEP: delete the pod Jun 26 22:26:11.478: INFO: Waiting for pod pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2 to disappear Jun 26 22:26:11.493: INFO: Pod pod-configmaps-23da62a7-16fc-4273-8dd2-f35fae7753d2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:26:11.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6707" for this suite. 
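
The "mappings" in the test above are the items list of a configMap volume, which remaps a key onto a custom relative path inside the mount. A minimal sketch, assuming kubectl access; the cm-demo names, paths, and the runAsUser value are illustrative:

kubectl create configmap cm-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo-pod
spec:
  securityContext:
    runAsUser: 1000            # run as non-root, as in the test
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      items:                   # the mapping: key -> relative path in the volume
      - key: data-1
        path: path/to/data-1
EOF
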
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:26:11.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5717 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5717 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5717 Jun 26 22:26:11.594: INFO: Found 0 stateful pods, waiting for 1 Jun 26 22:26:21.604: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 26 22:26:21.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:26:21.883: INFO: stderr: "I0626 22:26:21.740405 4467 log.go:172] (0xc000b2e6e0) (0xc0009960a0) Create stream\nI0626 22:26:21.740453 4467 log.go:172] (0xc000b2e6e0) (0xc0009960a0) Stream added, broadcasting: 1\nI0626 22:26:21.742825 4467 log.go:172] (0xc000b2e6e0) Reply frame received for 1\nI0626 22:26:21.742867 4467 log.go:172] (0xc000b2e6e0) (0xc000a30000) Create stream\nI0626 22:26:21.742877 4467 log.go:172] (0xc000b2e6e0) (0xc000a30000) Stream added, broadcasting: 3\nI0626 22:26:21.743914 4467 log.go:172] (0xc000b2e6e0) Reply frame received for 3\nI0626 22:26:21.743937 4467 log.go:172] (0xc000b2e6e0) (0xc000996140) Create stream\nI0626 22:26:21.743946 4467 log.go:172] (0xc000b2e6e0) (0xc000996140) Stream added, broadcasting: 5\nI0626 22:26:21.744695 4467 log.go:172] (0xc000b2e6e0) Reply frame received for 5\nI0626 22:26:21.836378 4467 log.go:172] (0xc000b2e6e0) Data frame received for 5\nI0626 22:26:21.836425 4467 log.go:172] (0xc000996140) (5) Data frame handling\nI0626 22:26:21.836466 4467 log.go:172] (0xc000996140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:26:21.872250 4467 log.go:172] (0xc000b2e6e0) Data frame received for 5\nI0626 22:26:21.872291 4467 log.go:172] (0xc000996140) (5) Data frame handling\nI0626 22:26:21.872390 4467 log.go:172] (0xc000b2e6e0) Data frame received for 3\nI0626 22:26:21.872434 4467 
log.go:172] (0xc000a30000) (3) Data frame handling\nI0626 22:26:21.872505 4467 log.go:172] (0xc000a30000) (3) Data frame sent\nI0626 22:26:21.872814 4467 log.go:172] (0xc000b2e6e0) Data frame received for 3\nI0626 22:26:21.872863 4467 log.go:172] (0xc000a30000) (3) Data frame handling\nI0626 22:26:21.874911 4467 log.go:172] (0xc000b2e6e0) Data frame received for 1\nI0626 22:26:21.874976 4467 log.go:172] (0xc0009960a0) (1) Data frame handling\nI0626 22:26:21.875083 4467 log.go:172] (0xc0009960a0) (1) Data frame sent\nI0626 22:26:21.875159 4467 log.go:172] (0xc000b2e6e0) (0xc0009960a0) Stream removed, broadcasting: 1\nI0626 22:26:21.875297 4467 log.go:172] (0xc000b2e6e0) Go away received\nI0626 22:26:21.875680 4467 log.go:172] (0xc000b2e6e0) (0xc0009960a0) Stream removed, broadcasting: 1\nI0626 22:26:21.875702 4467 log.go:172] (0xc000b2e6e0) (0xc000a30000) Stream removed, broadcasting: 3\nI0626 22:26:21.875714 4467 log.go:172] (0xc000b2e6e0) (0xc000996140) Stream removed, broadcasting: 5\n" Jun 26 22:26:21.883: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:26:21.883: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:26:21.887: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 26 22:26:31.891: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:26:31.891: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 22:26:31.908: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999309s Jun 26 22:26:32.913: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992502121s Jun 26 22:26:33.917: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987331686s Jun 26 22:26:34.923: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982835403s Jun 26 22:26:35.928: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976730947s Jun 26 22:26:36.932: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972528393s Jun 26 22:26:37.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967773324s Jun 26 22:26:38.942: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963104275s Jun 26 22:26:39.946: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.958212839s Jun 26 22:26:40.951: INFO: Verifying statefulset ss doesn't scale past 1 for another 954.121387ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5717 Jun 26 22:26:41.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:26:42.179: INFO: stderr: "I0626 22:26:42.090065 4490 log.go:172] (0xc00097cf20) (0xc000721e00) Create stream\nI0626 22:26:42.090129 4490 log.go:172] (0xc00097cf20) (0xc000721e00) Stream added, broadcasting: 1\nI0626 22:26:42.092502 4490 log.go:172] (0xc00097cf20) Reply frame received for 1\nI0626 22:26:42.092557 4490 log.go:172] (0xc00097cf20) (0xc00079a000) Create stream\nI0626 22:26:42.092582 4490 log.go:172] (0xc00097cf20) (0xc00079a000) Stream added, broadcasting: 3\nI0626 22:26:42.093539 4490 log.go:172] (0xc00097cf20) Reply frame received for 3\nI0626 22:26:42.093567 4490 log.go:172] (0xc00097cf20) (0xc000721ea0) Create stream\nI0626 22:26:42.093575 
4490 log.go:172] (0xc00097cf20) (0xc000721ea0) Stream added, broadcasting: 5\nI0626 22:26:42.094193 4490 log.go:172] (0xc00097cf20) Reply frame received for 5\nI0626 22:26:42.170941 4490 log.go:172] (0xc00097cf20) Data frame received for 5\nI0626 22:26:42.170977 4490 log.go:172] (0xc000721ea0) (5) Data frame handling\nI0626 22:26:42.170994 4490 log.go:172] (0xc000721ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 22:26:42.171019 4490 log.go:172] (0xc00097cf20) Data frame received for 3\nI0626 22:26:42.171031 4490 log.go:172] (0xc00079a000) (3) Data frame handling\nI0626 22:26:42.171044 4490 log.go:172] (0xc00079a000) (3) Data frame sent\nI0626 22:26:42.171056 4490 log.go:172] (0xc00097cf20) Data frame received for 3\nI0626 22:26:42.171066 4490 log.go:172] (0xc00079a000) (3) Data frame handling\nI0626 22:26:42.171705 4490 log.go:172] (0xc00097cf20) Data frame received for 5\nI0626 22:26:42.171738 4490 log.go:172] (0xc000721ea0) (5) Data frame handling\nI0626 22:26:42.174277 4490 log.go:172] (0xc00097cf20) Data frame received for 1\nI0626 22:26:42.174292 4490 log.go:172] (0xc000721e00) (1) Data frame handling\nI0626 22:26:42.174303 4490 log.go:172] (0xc000721e00) (1) Data frame sent\nI0626 22:26:42.174315 4490 log.go:172] (0xc00097cf20) (0xc000721e00) Stream removed, broadcasting: 1\nI0626 22:26:42.174330 4490 log.go:172] (0xc00097cf20) Go away received\nI0626 22:26:42.174769 4490 log.go:172] (0xc00097cf20) (0xc000721e00) Stream removed, broadcasting: 1\nI0626 22:26:42.174789 4490 log.go:172] (0xc00097cf20) (0xc00079a000) Stream removed, broadcasting: 3\nI0626 22:26:42.174798 4490 log.go:172] (0xc00097cf20) (0xc000721ea0) Stream removed, broadcasting: 5\n" Jun 26 22:26:42.180: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:26:42.180: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:26:42.199: INFO: Found 1 stateful pods, waiting for 3 Jun 26 22:26:52.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 22:26:52.204: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 22:26:52.204: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 26 22:26:52.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:26:52.437: INFO: stderr: "I0626 22:26:52.341351 4510 log.go:172] (0xc00064af20) (0xc0009b6000) Create stream\nI0626 22:26:52.341425 4510 log.go:172] (0xc00064af20) (0xc0009b6000) Stream added, broadcasting: 1\nI0626 22:26:52.344975 4510 log.go:172] (0xc00064af20) Reply frame received for 1\nI0626 22:26:52.345010 4510 log.go:172] (0xc00064af20) (0xc0009b60a0) Create stream\nI0626 22:26:52.345021 4510 log.go:172] (0xc00064af20) (0xc0009b60a0) Stream added, broadcasting: 3\nI0626 22:26:52.346058 4510 log.go:172] (0xc00064af20) Reply frame received for 3\nI0626 22:26:52.346100 4510 log.go:172] (0xc00064af20) (0xc0006fba40) Create stream\nI0626 22:26:52.346114 4510 log.go:172] (0xc00064af20) (0xc0006fba40) Stream added, broadcasting: 5\nI0626 22:26:52.347128 4510 log.go:172] (0xc00064af20) Reply frame received for 5\nI0626 22:26:52.430278 
4510 log.go:172] (0xc00064af20) Data frame received for 3\nI0626 22:26:52.430312 4510 log.go:172] (0xc0009b60a0) (3) Data frame handling\nI0626 22:26:52.430329 4510 log.go:172] (0xc0009b60a0) (3) Data frame sent\nI0626 22:26:52.430336 4510 log.go:172] (0xc00064af20) Data frame received for 3\nI0626 22:26:52.430342 4510 log.go:172] (0xc0009b60a0) (3) Data frame handling\nI0626 22:26:52.430386 4510 log.go:172] (0xc00064af20) Data frame received for 5\nI0626 22:26:52.430396 4510 log.go:172] (0xc0006fba40) (5) Data frame handling\nI0626 22:26:52.430404 4510 log.go:172] (0xc0006fba40) (5) Data frame sent\nI0626 22:26:52.430414 4510 log.go:172] (0xc00064af20) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:26:52.430424 4510 log.go:172] (0xc0006fba40) (5) Data frame handling\nI0626 22:26:52.432289 4510 log.go:172] (0xc00064af20) Data frame received for 1\nI0626 22:26:52.432305 4510 log.go:172] (0xc0009b6000) (1) Data frame handling\nI0626 22:26:52.432316 4510 log.go:172] (0xc0009b6000) (1) Data frame sent\nI0626 22:26:52.432325 4510 log.go:172] (0xc00064af20) (0xc0009b6000) Stream removed, broadcasting: 1\nI0626 22:26:52.432514 4510 log.go:172] (0xc00064af20) Go away received\nI0626 22:26:52.432584 4510 log.go:172] (0xc00064af20) (0xc0009b6000) Stream removed, broadcasting: 1\nI0626 22:26:52.432597 4510 log.go:172] (0xc00064af20) (0xc0009b60a0) Stream removed, broadcasting: 3\nI0626 22:26:52.432604 4510 log.go:172] (0xc00064af20) (0xc0006fba40) Stream removed, broadcasting: 5\n" Jun 26 22:26:52.438: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:26:52.438: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:26:52.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:26:52.683: INFO: stderr: "I0626 22:26:52.568852 4533 log.go:172] (0xc00044ea50) (0xc0003c8000) Create stream\nI0626 22:26:52.568916 4533 log.go:172] (0xc00044ea50) (0xc0003c8000) Stream added, broadcasting: 1\nI0626 22:26:52.571938 4533 log.go:172] (0xc00044ea50) Reply frame received for 1\nI0626 22:26:52.571980 4533 log.go:172] (0xc00044ea50) (0xc0009a6000) Create stream\nI0626 22:26:52.571992 4533 log.go:172] (0xc00044ea50) (0xc0009a6000) Stream added, broadcasting: 3\nI0626 22:26:52.573558 4533 log.go:172] (0xc00044ea50) Reply frame received for 3\nI0626 22:26:52.573593 4533 log.go:172] (0xc00044ea50) (0xc0003c80a0) Create stream\nI0626 22:26:52.573606 4533 log.go:172] (0xc00044ea50) (0xc0003c80a0) Stream added, broadcasting: 5\nI0626 22:26:52.574870 4533 log.go:172] (0xc00044ea50) Reply frame received for 5\nI0626 22:26:52.631131 4533 log.go:172] (0xc00044ea50) Data frame received for 5\nI0626 22:26:52.631160 4533 log.go:172] (0xc0003c80a0) (5) Data frame handling\nI0626 22:26:52.631175 4533 log.go:172] (0xc0003c80a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:26:52.673089 4533 log.go:172] (0xc00044ea50) Data frame received for 3\nI0626 22:26:52.673230 4533 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0626 22:26:52.673252 4533 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0626 22:26:52.673313 4533 log.go:172] (0xc00044ea50) Data frame received for 3\nI0626 22:26:52.673329 4533 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0626 22:26:52.674006 4533 
log.go:172] (0xc00044ea50) Data frame received for 5\nI0626 22:26:52.674044 4533 log.go:172] (0xc0003c80a0) (5) Data frame handling\nI0626 22:26:52.676067 4533 log.go:172] (0xc00044ea50) Data frame received for 1\nI0626 22:26:52.676092 4533 log.go:172] (0xc0003c8000) (1) Data frame handling\nI0626 22:26:52.676117 4533 log.go:172] (0xc0003c8000) (1) Data frame sent\nI0626 22:26:52.676152 4533 log.go:172] (0xc00044ea50) (0xc0003c8000) Stream removed, broadcasting: 1\nI0626 22:26:52.676222 4533 log.go:172] (0xc00044ea50) Go away received\nI0626 22:26:52.676573 4533 log.go:172] (0xc00044ea50) (0xc0003c8000) Stream removed, broadcasting: 1\nI0626 22:26:52.676598 4533 log.go:172] (0xc00044ea50) (0xc0009a6000) Stream removed, broadcasting: 3\nI0626 22:26:52.676612 4533 log.go:172] (0xc00044ea50) (0xc0003c80a0) Stream removed, broadcasting: 5\n" Jun 26 22:26:52.683: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:26:52.683: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:26:52.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 22:26:52.930: INFO: stderr: "I0626 22:26:52.819820 4555 log.go:172] (0xc00011ac60) (0xc00067be00) Create stream\nI0626 22:26:52.819885 4555 log.go:172] (0xc00011ac60) (0xc00067be00) Stream added, broadcasting: 1\nI0626 22:26:52.822869 4555 log.go:172] (0xc00011ac60) Reply frame received for 1\nI0626 22:26:52.822942 4555 log.go:172] (0xc00011ac60) (0xc0007bc0a0) Create stream\nI0626 22:26:52.822962 4555 log.go:172] (0xc00011ac60) (0xc0007bc0a0) Stream added, broadcasting: 3\nI0626 22:26:52.823981 4555 log.go:172] (0xc00011ac60) Reply frame received for 3\nI0626 22:26:52.824024 4555 log.go:172] (0xc00011ac60) (0xc00067bea0) Create stream\nI0626 22:26:52.824046 4555 log.go:172] (0xc00011ac60) (0xc00067bea0) Stream added, broadcasting: 5\nI0626 22:26:52.825046 4555 log.go:172] (0xc00011ac60) Reply frame received for 5\nI0626 22:26:52.885307 4555 log.go:172] (0xc00011ac60) Data frame received for 5\nI0626 22:26:52.885338 4555 log.go:172] (0xc00067bea0) (5) Data frame handling\nI0626 22:26:52.885356 4555 log.go:172] (0xc00067bea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 22:26:52.918885 4555 log.go:172] (0xc00011ac60) Data frame received for 3\nI0626 22:26:52.918931 4555 log.go:172] (0xc0007bc0a0) (3) Data frame handling\nI0626 22:26:52.918974 4555 log.go:172] (0xc0007bc0a0) (3) Data frame sent\nI0626 22:26:52.919337 4555 log.go:172] (0xc00011ac60) Data frame received for 5\nI0626 22:26:52.919364 4555 log.go:172] (0xc00067bea0) (5) Data frame handling\nI0626 22:26:52.919734 4555 log.go:172] (0xc00011ac60) Data frame received for 3\nI0626 22:26:52.919771 4555 log.go:172] (0xc0007bc0a0) (3) Data frame handling\nI0626 22:26:52.921729 4555 log.go:172] (0xc00011ac60) Data frame received for 1\nI0626 22:26:52.921759 4555 log.go:172] (0xc00067be00) (1) Data frame handling\nI0626 22:26:52.921789 4555 log.go:172] (0xc00067be00) (1) Data frame sent\nI0626 22:26:52.921841 4555 log.go:172] (0xc00011ac60) (0xc00067be00) Stream removed, broadcasting: 1\nI0626 22:26:52.921873 4555 log.go:172] (0xc00011ac60) Go away received\nI0626 22:26:52.922527 4555 log.go:172] (0xc00011ac60) (0xc00067be00) Stream removed, broadcasting: 1\nI0626 22:26:52.922552 4555 log.go:172] 
(0xc00011ac60) (0xc0007bc0a0) Stream removed, broadcasting: 3\nI0626 22:26:52.922564 4555 log.go:172] (0xc00011ac60) (0xc00067bea0) Stream removed, broadcasting: 5\n" Jun 26 22:26:52.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 22:26:52.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 22:26:52.930: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 22:26:52.933: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 26 22:27:02.942: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:27:02.942: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:27:02.942: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 26 22:27:02.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999124s Jun 26 22:27:03.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991827176s Jun 26 22:27:04.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987975379s Jun 26 22:27:05.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982890728s Jun 26 22:27:06.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978113348s Jun 26 22:27:07.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973108618s Jun 26 22:27:08.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967388731s Jun 26 22:27:09.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961921559s Jun 26 22:27:10.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956110772s Jun 26 22:27:12.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.940809ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5717 Jun 26 22:27:13.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:27:13.255: INFO: stderr: "I0626 22:27:13.143773 4575 log.go:172] (0xc000b3a000) (0xc0007554a0) Create stream\nI0626 22:27:13.143866 4575 log.go:172] (0xc000b3a000) (0xc0007554a0) Stream added, broadcasting: 1\nI0626 22:27:13.147122 4575 log.go:172] (0xc000b3a000) Reply frame received for 1\nI0626 22:27:13.147170 4575 log.go:172] (0xc000b3a000) (0xc000755540) Create stream\nI0626 22:27:13.147181 4575 log.go:172] (0xc000b3a000) (0xc000755540) Stream added, broadcasting: 3\nI0626 22:27:13.148319 4575 log.go:172] (0xc000b3a000) Reply frame received for 3\nI0626 22:27:13.148350 4575 log.go:172] (0xc000b3a000) (0xc0006f5ae0) Create stream\nI0626 22:27:13.148361 4575 log.go:172] (0xc000b3a000) (0xc0006f5ae0) Stream added, broadcasting: 5\nI0626 22:27:13.149554 4575 log.go:172] (0xc000b3a000) Reply frame received for 5\nI0626 22:27:13.248734 4575 log.go:172] (0xc000b3a000) Data frame received for 3\nI0626 22:27:13.248766 4575 log.go:172] (0xc000755540) (3) Data frame handling\nI0626 22:27:13.248792 4575 log.go:172] (0xc000b3a000) Data frame received for 5\nI0626 22:27:13.248812 4575 log.go:172] (0xc0006f5ae0) (5) Data frame handling\nI0626 22:27:13.248824 4575 log.go:172] (0xc0006f5ae0) (5) Data frame sent\nI0626 22:27:13.248833 4575 log.go:172] (0xc000b3a000) Data frame received for 
5\nI0626 22:27:13.248845 4575 log.go:172] (0xc0006f5ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 22:27:13.248932 4575 log.go:172] (0xc000755540) (3) Data frame sent\nI0626 22:27:13.248999 4575 log.go:172] (0xc000b3a000) Data frame received for 3\nI0626 22:27:13.249012 4575 log.go:172] (0xc000755540) (3) Data frame handling\nI0626 22:27:13.250271 4575 log.go:172] (0xc000b3a000) Data frame received for 1\nI0626 22:27:13.250308 4575 log.go:172] (0xc0007554a0) (1) Data frame handling\nI0626 22:27:13.250328 4575 log.go:172] (0xc0007554a0) (1) Data frame sent\nI0626 22:27:13.250349 4575 log.go:172] (0xc000b3a000) (0xc0007554a0) Stream removed, broadcasting: 1\nI0626 22:27:13.250370 4575 log.go:172] (0xc000b3a000) Go away received\nI0626 22:27:13.250760 4575 log.go:172] (0xc000b3a000) (0xc0007554a0) Stream removed, broadcasting: 1\nI0626 22:27:13.250778 4575 log.go:172] (0xc000b3a000) (0xc000755540) Stream removed, broadcasting: 3\nI0626 22:27:13.250785 4575 log.go:172] (0xc000b3a000) (0xc0006f5ae0) Stream removed, broadcasting: 5\n" Jun 26 22:27:13.255: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:27:13.255: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:27:13.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:27:13.497: INFO: stderr: "I0626 22:27:13.403756 4596 log.go:172] (0xc00086ea50) (0xc000760000) Create stream\nI0626 22:27:13.403851 4596 log.go:172] (0xc00086ea50) (0xc000760000) Stream added, broadcasting: 1\nI0626 22:27:13.407094 4596 log.go:172] (0xc00086ea50) Reply frame received for 1\nI0626 22:27:13.407170 4596 log.go:172] (0xc00086ea50) (0xc0006719a0) Create stream\nI0626 22:27:13.407195 4596 log.go:172] (0xc00086ea50) (0xc0006719a0) Stream added, broadcasting: 3\nI0626 22:27:13.408202 4596 log.go:172] (0xc00086ea50) Reply frame received for 3\nI0626 22:27:13.408245 4596 log.go:172] (0xc00086ea50) (0xc000760140) Create stream\nI0626 22:27:13.408259 4596 log.go:172] (0xc00086ea50) (0xc000760140) Stream added, broadcasting: 5\nI0626 22:27:13.409612 4596 log.go:172] (0xc00086ea50) Reply frame received for 5\nI0626 22:27:13.488676 4596 log.go:172] (0xc00086ea50) Data frame received for 3\nI0626 22:27:13.488697 4596 log.go:172] (0xc0006719a0) (3) Data frame handling\nI0626 22:27:13.488705 4596 log.go:172] (0xc0006719a0) (3) Data frame sent\nI0626 22:27:13.488709 4596 log.go:172] (0xc00086ea50) Data frame received for 3\nI0626 22:27:13.488715 4596 log.go:172] (0xc0006719a0) (3) Data frame handling\nI0626 22:27:13.489384 4596 log.go:172] (0xc00086ea50) Data frame received for 5\nI0626 22:27:13.489445 4596 log.go:172] (0xc000760140) (5) Data frame handling\nI0626 22:27:13.489534 4596 log.go:172] (0xc000760140) (5) Data frame sent\nI0626 22:27:13.489635 4596 log.go:172] (0xc00086ea50) Data frame received for 5\nI0626 22:27:13.489668 4596 log.go:172] (0xc000760140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 22:27:13.491558 4596 log.go:172] (0xc00086ea50) Data frame received for 1\nI0626 22:27:13.491578 4596 log.go:172] (0xc000760000) (1) Data frame handling\nI0626 22:27:13.491588 4596 log.go:172] (0xc000760000) (1) Data frame sent\nI0626 22:27:13.491599 4596 log.go:172] (0xc00086ea50) (0xc000760000) Stream removed, 
broadcasting: 1\nI0626 22:27:13.491618 4596 log.go:172] (0xc00086ea50) Go away received\nI0626 22:27:13.492190 4596 log.go:172] (0xc00086ea50) (0xc000760000) Stream removed, broadcasting: 1\nI0626 22:27:13.492232 4596 log.go:172] (0xc00086ea50) (0xc0006719a0) Stream removed, broadcasting: 3\nI0626 22:27:13.492261 4596 log.go:172] (0xc00086ea50) (0xc000760140) Stream removed, broadcasting: 5\n" Jun 26 22:27:13.497: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:27:13.497: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:27:13.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5717 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 22:27:13.747: INFO: stderr: "I0626 22:27:13.680622 4616 log.go:172] (0xc0000f5600) (0xc000603f40) Create stream\nI0626 22:27:13.680672 4616 log.go:172] (0xc0000f5600) (0xc000603f40) Stream added, broadcasting: 1\nI0626 22:27:13.683154 4616 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0626 22:27:13.683199 4616 log.go:172] (0xc0000f5600) (0xc00057e820) Create stream\nI0626 22:27:13.683212 4616 log.go:172] (0xc0000f5600) (0xc00057e820) Stream added, broadcasting: 3\nI0626 22:27:13.684346 4616 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0626 22:27:13.684420 4616 log.go:172] (0xc0000f5600) (0xc0007ec8c0) Create stream\nI0626 22:27:13.684437 4616 log.go:172] (0xc0000f5600) (0xc0007ec8c0) Stream added, broadcasting: 5\nI0626 22:27:13.685687 4616 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0626 22:27:13.738003 4616 log.go:172] (0xc0000f5600) Data frame received for 3\nI0626 22:27:13.738027 4616 log.go:172] (0xc00057e820) (3) Data frame handling\nI0626 22:27:13.738051 4616 log.go:172] (0xc00057e820) (3) Data frame sent\nI0626 22:27:13.738066 4616 log.go:172] (0xc0000f5600) Data frame received for 3\nI0626 22:27:13.738074 4616 log.go:172] (0xc00057e820) (3) Data frame handling\nI0626 22:27:13.738127 4616 log.go:172] (0xc0000f5600) Data frame received for 5\nI0626 22:27:13.738148 4616 log.go:172] (0xc0007ec8c0) (5) Data frame handling\nI0626 22:27:13.738166 4616 log.go:172] (0xc0007ec8c0) (5) Data frame sent\nI0626 22:27:13.738176 4616 log.go:172] (0xc0000f5600) Data frame received for 5\nI0626 22:27:13.738187 4616 log.go:172] (0xc0007ec8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 22:27:13.740044 4616 log.go:172] (0xc0000f5600) Data frame received for 1\nI0626 22:27:13.740073 4616 log.go:172] (0xc000603f40) (1) Data frame handling\nI0626 22:27:13.740086 4616 log.go:172] (0xc000603f40) (1) Data frame sent\nI0626 22:27:13.740099 4616 log.go:172] (0xc0000f5600) (0xc000603f40) Stream removed, broadcasting: 1\nI0626 22:27:13.740115 4616 log.go:172] (0xc0000f5600) Go away received\nI0626 22:27:13.740600 4616 log.go:172] (0xc0000f5600) (0xc000603f40) Stream removed, broadcasting: 1\nI0626 22:27:13.740623 4616 log.go:172] (0xc0000f5600) (0xc00057e820) Stream removed, broadcasting: 3\nI0626 22:27:13.740643 4616 log.go:172] (0xc0000f5600) (0xc0007ec8c0) Stream removed, broadcasting: 5\n" Jun 26 22:27:13.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 22:27:13.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 22:27:13.747: INFO: Scaling 
statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 26 22:27:33.762: INFO: Deleting all statefulset in ns statefulset-5717 Jun 26 22:27:33.766: INFO: Scaling statefulset ss to 0 Jun 26 22:27:33.773: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 22:27:33.775: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:27:33.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5717" for this suite. • [SLOW TEST:82.290 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":261,"skipped":4407,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:27:33.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-1ea3ec09-3995-4be3-8e9a-503b5a74d929 STEP: Creating a pod to test consume configMaps Jun 26 22:27:33.868: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb" in namespace "configmap-8752" to be "success or failure" Jun 26 22:27:33.918: INFO: Pod "pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 49.869071ms Jun 26 22:27:35.966: INFO: Pod "pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098307247s Jun 26 22:27:37.971: INFO: Pod "pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.1028827s STEP: Saw pod success Jun 26 22:27:37.971: INFO: Pod "pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb" satisfied condition "success or failure" Jun 26 22:27:37.974: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb container configmap-volume-test: STEP: delete the pod Jun 26 22:27:38.007: INFO: Waiting for pod pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb to disappear Jun 26 22:27:38.011: INFO: Pod pod-configmaps-fc08933b-eb84-45d9-af02-af4fe3cbadeb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:27:38.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8752" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4410,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:27:38.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-20f17de9-906e-4ba0-b65d-2316a90ce581 STEP: Creating a pod to test consume secrets Jun 26 22:27:38.115: INFO: Waiting up to 5m0s for pod "pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b" in namespace "secrets-5311" to be "success or failure" Jun 26 22:27:38.148: INFO: Pod "pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.070947ms Jun 26 22:27:40.152: INFO: Pod "pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037291364s Jun 26 22:27:42.157: INFO: Pod "pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041630251s STEP: Saw pod success Jun 26 22:27:42.157: INFO: Pod "pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b" satisfied condition "success or failure" Jun 26 22:27:42.159: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b container secret-volume-test: STEP: delete the pod Jun 26 22:27:42.279: INFO: Waiting for pod pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b to disappear Jun 26 22:27:42.298: INFO: Pod pod-secrets-283814f0-9b69-4d7c-b579-4b150f7d8c3b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:27:42.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5311" for this suite. 
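
The defaultMode/fsGroup combination exercised above can be sketched minimally as follows, assuming kubectl access; the sec-demo names and the numeric IDs are illustrative (note that defaultMode is written in octal in YAML, whereas JSON manifests need the decimal value):

kubectl create secret generic sec-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sec-demo-pod
spec:
  securityContext:
    runAsUser: 1000    # non-root, as in the test
    fsGroup: 1001      # kubelet sets this group on the volume's files
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["ls", "-ln", "/etc/secret-volume"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret-volume
  volumes:
  - name: sec
    secret:
      secretName: sec-demo
      defaultMode: 0440   # group-readable, so the fsGroup member can read it
EOF
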
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4410,"failed":0} SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:27:42.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 26 22:27:52.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:52.409: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:52.441856 6 log.go:172] (0xc0027908f0) (0xc002257a40) Create stream I0626 22:27:52.441888 6 log.go:172] (0xc0027908f0) (0xc002257a40) Stream added, broadcasting: 1 I0626 22:27:52.444311 6 log.go:172] (0xc0027908f0) Reply frame received for 1 I0626 22:27:52.444356 6 log.go:172] (0xc0027908f0) (0xc002257ae0) Create stream I0626 22:27:52.444371 6 log.go:172] (0xc0027908f0) (0xc002257ae0) Stream added, broadcasting: 3 I0626 22:27:52.445829 6 log.go:172] (0xc0027908f0) Reply frame received for 3 I0626 22:27:52.445889 6 log.go:172] (0xc0027908f0) (0xc002257c20) Create stream I0626 22:27:52.445903 6 log.go:172] (0xc0027908f0) (0xc002257c20) Stream added, broadcasting: 5 I0626 22:27:52.446847 6 log.go:172] (0xc0027908f0) Reply frame received for 5 I0626 22:27:52.531252 6 log.go:172] (0xc0027908f0) Data frame received for 3 I0626 22:27:52.531292 6 log.go:172] (0xc002257ae0) (3) Data frame handling I0626 22:27:52.531317 6 log.go:172] (0xc002257ae0) (3) Data frame sent I0626 22:27:52.531336 6 log.go:172] (0xc0027908f0) Data frame received for 3 I0626 22:27:52.531361 6 log.go:172] (0xc002257ae0) (3) Data frame handling I0626 22:27:52.531379 6 log.go:172] (0xc0027908f0) Data frame received for 5 I0626 22:27:52.531397 6 log.go:172] (0xc002257c20) (5) Data frame handling I0626 22:27:52.532768 6 log.go:172] (0xc0027908f0) Data frame received for 1 I0626 22:27:52.532803 6 log.go:172] (0xc002257a40) (1) Data frame handling I0626 22:27:52.532826 6 log.go:172] (0xc002257a40) (1) Data frame sent I0626 22:27:52.532845 6 log.go:172] (0xc0027908f0) (0xc002257a40) Stream removed, broadcasting: 1 I0626 22:27:52.532866 6 log.go:172] (0xc0027908f0) Go away received I0626 22:27:52.533065 6 log.go:172] (0xc0027908f0) (0xc002257a40) Stream removed, broadcasting: 1 I0626 22:27:52.533092 6 log.go:172] (0xc0027908f0) (0xc002257ae0) Stream removed, broadcasting: 3 I0626 22:27:52.533107 6 log.go:172] (0xc0027908f0) (0xc002257c20) Stream removed, broadcasting: 5 Jun 26 22:27:52.533: INFO: Exec 
stderr: "" Jun 26 22:27:52.533: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:52.533: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:52.565830 6 log.go:172] (0xc002790f20) (0xc002257ea0) Create stream I0626 22:27:52.565865 6 log.go:172] (0xc002790f20) (0xc002257ea0) Stream added, broadcasting: 1 I0626 22:27:52.567574 6 log.go:172] (0xc002790f20) Reply frame received for 1 I0626 22:27:52.567637 6 log.go:172] (0xc002790f20) (0xc001e2e0a0) Create stream I0626 22:27:52.567647 6 log.go:172] (0xc002790f20) (0xc001e2e0a0) Stream added, broadcasting: 3 I0626 22:27:52.568323 6 log.go:172] (0xc002790f20) Reply frame received for 3 I0626 22:27:52.568350 6 log.go:172] (0xc002790f20) (0xc002257f40) Create stream I0626 22:27:52.568358 6 log.go:172] (0xc002790f20) (0xc002257f40) Stream added, broadcasting: 5 I0626 22:27:52.568909 6 log.go:172] (0xc002790f20) Reply frame received for 5 I0626 22:27:52.637859 6 log.go:172] (0xc002790f20) Data frame received for 3 I0626 22:27:52.637914 6 log.go:172] (0xc001e2e0a0) (3) Data frame handling I0626 22:27:52.637938 6 log.go:172] (0xc001e2e0a0) (3) Data frame sent I0626 22:27:52.637955 6 log.go:172] (0xc002790f20) Data frame received for 3 I0626 22:27:52.637969 6 log.go:172] (0xc001e2e0a0) (3) Data frame handling I0626 22:27:52.637988 6 log.go:172] (0xc002790f20) Data frame received for 5 I0626 22:27:52.638002 6 log.go:172] (0xc002257f40) (5) Data frame handling I0626 22:27:52.639817 6 log.go:172] (0xc002790f20) Data frame received for 1 I0626 22:27:52.639876 6 log.go:172] (0xc002257ea0) (1) Data frame handling I0626 22:27:52.639898 6 log.go:172] (0xc002257ea0) (1) Data frame sent I0626 22:27:52.639910 6 log.go:172] (0xc002790f20) (0xc002257ea0) Stream removed, broadcasting: 1 I0626 22:27:52.639922 6 log.go:172] (0xc002790f20) Go away received I0626 22:27:52.640153 6 log.go:172] (0xc002790f20) (0xc002257ea0) Stream removed, broadcasting: 1 I0626 22:27:52.640196 6 log.go:172] (0xc002790f20) (0xc001e2e0a0) Stream removed, broadcasting: 3 I0626 22:27:52.640215 6 log.go:172] (0xc002790f20) (0xc002257f40) Stream removed, broadcasting: 5 Jun 26 22:27:52.640: INFO: Exec stderr: "" Jun 26 22:27:52.640: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:52.640: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:52.673473 6 log.go:172] (0xc0017a8370) (0xc001e2e3c0) Create stream I0626 22:27:52.673512 6 log.go:172] (0xc0017a8370) (0xc001e2e3c0) Stream added, broadcasting: 1 I0626 22:27:52.675799 6 log.go:172] (0xc0017a8370) Reply frame received for 1 I0626 22:27:52.675846 6 log.go:172] (0xc0017a8370) (0xc001e2e460) Create stream I0626 22:27:52.675870 6 log.go:172] (0xc0017a8370) (0xc001e2e460) Stream added, broadcasting: 3 I0626 22:27:52.676671 6 log.go:172] (0xc0017a8370) Reply frame received for 3 I0626 22:27:52.676716 6 log.go:172] (0xc0017a8370) (0xc001e2e500) Create stream I0626 22:27:52.676736 6 log.go:172] (0xc0017a8370) (0xc001e2e500) Stream added, broadcasting: 5 I0626 22:27:52.677642 6 log.go:172] (0xc0017a8370) Reply frame received for 5 I0626 22:27:52.738534 6 log.go:172] (0xc0017a8370) Data frame received for 5 I0626 22:27:52.738670 6 log.go:172] (0xc001e2e500) (5) Data frame handling I0626 22:27:52.738749 6 log.go:172] 
(0xc0017a8370) Data frame received for 3 I0626 22:27:52.738798 6 log.go:172] (0xc001e2e460) (3) Data frame handling I0626 22:27:52.738840 6 log.go:172] (0xc001e2e460) (3) Data frame sent I0626 22:27:52.738883 6 log.go:172] (0xc0017a8370) Data frame received for 3 I0626 22:27:52.738916 6 log.go:172] (0xc001e2e460) (3) Data frame handling I0626 22:27:52.740984 6 log.go:172] (0xc0017a8370) Data frame received for 1 I0626 22:27:52.741019 6 log.go:172] (0xc001e2e3c0) (1) Data frame handling I0626 22:27:52.741055 6 log.go:172] (0xc001e2e3c0) (1) Data frame sent I0626 22:27:52.741074 6 log.go:172] (0xc0017a8370) (0xc001e2e3c0) Stream removed, broadcasting: 1 I0626 22:27:52.741092 6 log.go:172] (0xc0017a8370) Go away received I0626 22:27:52.741311 6 log.go:172] (0xc0017a8370) (0xc001e2e3c0) Stream removed, broadcasting: 1 I0626 22:27:52.741341 6 log.go:172] (0xc0017a8370) (0xc001e2e460) Stream removed, broadcasting: 3 I0626 22:27:52.741350 6 log.go:172] (0xc0017a8370) (0xc001e2e500) Stream removed, broadcasting: 5 Jun 26 22:27:52.741: INFO: Exec stderr: "" Jun 26 22:27:52.741: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:52.741: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:52.772660 6 log.go:172] (0xc0017a89a0) (0xc001e2e780) Create stream I0626 22:27:52.772689 6 log.go:172] (0xc0017a89a0) (0xc001e2e780) Stream added, broadcasting: 1 I0626 22:27:52.775604 6 log.go:172] (0xc0017a89a0) Reply frame received for 1 I0626 22:27:52.775637 6 log.go:172] (0xc0017a89a0) (0xc0021e9900) Create stream I0626 22:27:52.775650 6 log.go:172] (0xc0017a89a0) (0xc0021e9900) Stream added, broadcasting: 3 I0626 22:27:52.776622 6 log.go:172] (0xc0017a89a0) Reply frame received for 3 I0626 22:27:52.776670 6 log.go:172] (0xc0017a89a0) (0xc00100d2c0) Create stream I0626 22:27:52.776688 6 log.go:172] (0xc0017a89a0) (0xc00100d2c0) Stream added, broadcasting: 5 I0626 22:27:52.777836 6 log.go:172] (0xc0017a89a0) Reply frame received for 5 I0626 22:27:52.835245 6 log.go:172] (0xc0017a89a0) Data frame received for 5 I0626 22:27:52.835282 6 log.go:172] (0xc00100d2c0) (5) Data frame handling I0626 22:27:52.835299 6 log.go:172] (0xc0017a89a0) Data frame received for 3 I0626 22:27:52.835309 6 log.go:172] (0xc0021e9900) (3) Data frame handling I0626 22:27:52.835321 6 log.go:172] (0xc0021e9900) (3) Data frame sent I0626 22:27:52.835330 6 log.go:172] (0xc0017a89a0) Data frame received for 3 I0626 22:27:52.835335 6 log.go:172] (0xc0021e9900) (3) Data frame handling I0626 22:27:52.836973 6 log.go:172] (0xc0017a89a0) Data frame received for 1 I0626 22:27:52.837015 6 log.go:172] (0xc001e2e780) (1) Data frame handling I0626 22:27:52.837055 6 log.go:172] (0xc001e2e780) (1) Data frame sent I0626 22:27:52.837085 6 log.go:172] (0xc0017a89a0) (0xc001e2e780) Stream removed, broadcasting: 1 I0626 22:27:52.837326 6 log.go:172] (0xc0017a89a0) Go away received I0626 22:27:52.837359 6 log.go:172] (0xc0017a89a0) (0xc001e2e780) Stream removed, broadcasting: 1 I0626 22:27:52.837379 6 log.go:172] (0xc0017a89a0) (0xc0021e9900) Stream removed, broadcasting: 3 I0626 22:27:52.837389 6 log.go:172] (0xc0017a89a0) (0xc00100d2c0) Stream removed, broadcasting: 5 Jun 26 22:27:52.837: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 26 22:27:52.837: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:52.837: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:52.871302 6 log.go:172] (0xc00285dc30) (0xc00100d9a0) Create stream I0626 22:27:52.871351 6 log.go:172] (0xc00285dc30) (0xc00100d9a0) Stream added, broadcasting: 1 I0626 22:27:52.873900 6 log.go:172] (0xc00285dc30) Reply frame received for 1 I0626 22:27:52.873940 6 log.go:172] (0xc00285dc30) (0xc0028520a0) Create stream I0626 22:27:52.873954 6 log.go:172] (0xc00285dc30) (0xc0028520a0) Stream added, broadcasting: 3 I0626 22:27:52.874888 6 log.go:172] (0xc00285dc30) Reply frame received for 3 I0626 22:27:52.874925 6 log.go:172] (0xc00285dc30) (0xc001e2e820) Create stream I0626 22:27:52.874936 6 log.go:172] (0xc00285dc30) (0xc001e2e820) Stream added, broadcasting: 5 I0626 22:27:52.876132 6 log.go:172] (0xc00285dc30) Reply frame received for 5 I0626 22:27:52.936948 6 log.go:172] (0xc00285dc30) Data frame received for 5 I0626 22:27:52.936977 6 log.go:172] (0xc001e2e820) (5) Data frame handling I0626 22:27:52.936995 6 log.go:172] (0xc00285dc30) Data frame received for 3 I0626 22:27:52.937006 6 log.go:172] (0xc0028520a0) (3) Data frame handling I0626 22:27:52.937019 6 log.go:172] (0xc0028520a0) (3) Data frame sent I0626 22:27:52.937069 6 log.go:172] (0xc00285dc30) Data frame received for 3 I0626 22:27:52.937083 6 log.go:172] (0xc0028520a0) (3) Data frame handling I0626 22:27:52.938669 6 log.go:172] (0xc00285dc30) Data frame received for 1 I0626 22:27:52.938689 6 log.go:172] (0xc00100d9a0) (1) Data frame handling I0626 22:27:52.938706 6 log.go:172] (0xc00100d9a0) (1) Data frame sent I0626 22:27:52.938774 6 log.go:172] (0xc00285dc30) (0xc00100d9a0) Stream removed, broadcasting: 1 I0626 22:27:52.938822 6 log.go:172] (0xc00285dc30) Go away received I0626 22:27:52.938858 6 log.go:172] (0xc00285dc30) (0xc00100d9a0) Stream removed, broadcasting: 1 I0626 22:27:52.938869 6 log.go:172] (0xc00285dc30) (0xc0028520a0) Stream removed, broadcasting: 3 I0626 22:27:52.938877 6 log.go:172] (0xc00285dc30) (0xc001e2e820) Stream removed, broadcasting: 5 Jun 26 22:27:52.938: INFO: Exec stderr: "" Jun 26 22:27:52.938: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:52.938: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:52.971794 6 log.go:172] (0xc001e64160) (0xc0026cc000) Create stream I0626 22:27:52.971819 6 log.go:172] (0xc001e64160) (0xc0026cc000) Stream added, broadcasting: 1 I0626 22:27:52.974482 6 log.go:172] (0xc001e64160) Reply frame received for 1 I0626 22:27:52.974546 6 log.go:172] (0xc001e64160) (0xc00100da40) Create stream I0626 22:27:52.974573 6 log.go:172] (0xc001e64160) (0xc00100da40) Stream added, broadcasting: 3 I0626 22:27:52.975560 6 log.go:172] (0xc001e64160) Reply frame received for 3 I0626 22:27:52.975604 6 log.go:172] (0xc001e64160) (0xc00100dc20) Create stream I0626 22:27:52.975619 6 log.go:172] (0xc001e64160) (0xc00100dc20) Stream added, broadcasting: 5 I0626 22:27:52.976595 6 log.go:172] (0xc001e64160) Reply frame received for 5 I0626 22:27:53.032735 6 log.go:172] (0xc001e64160) Data frame received for 3 I0626 22:27:53.032778 6 log.go:172] (0xc00100da40) (3) Data frame handling I0626 22:27:53.032875 6 log.go:172] (0xc00100da40) (3) Data frame sent I0626 22:27:53.032897 6 log.go:172] (0xc001e64160) 
Data frame received for 3 I0626 22:27:53.032910 6 log.go:172] (0xc00100da40) (3) Data frame handling I0626 22:27:53.032927 6 log.go:172] (0xc001e64160) Data frame received for 5 I0626 22:27:53.032964 6 log.go:172] (0xc00100dc20) (5) Data frame handling I0626 22:27:53.035260 6 log.go:172] (0xc001e64160) Data frame received for 1 I0626 22:27:53.035290 6 log.go:172] (0xc0026cc000) (1) Data frame handling I0626 22:27:53.035313 6 log.go:172] (0xc0026cc000) (1) Data frame sent I0626 22:27:53.035338 6 log.go:172] (0xc001e64160) (0xc0026cc000) Stream removed, broadcasting: 1 I0626 22:27:53.035355 6 log.go:172] (0xc001e64160) Go away received I0626 22:27:53.035523 6 log.go:172] (0xc001e64160) (0xc0026cc000) Stream removed, broadcasting: 1 I0626 22:27:53.035547 6 log.go:172] (0xc001e64160) (0xc00100da40) Stream removed, broadcasting: 3 I0626 22:27:53.035556 6 log.go:172] (0xc001e64160) (0xc00100dc20) Stream removed, broadcasting: 5 Jun 26 22:27:53.035: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 26 22:27:53.035: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:53.035: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:53.067061 6 log.go:172] (0xc0017a9550) (0xc001e2f0e0) Create stream I0626 22:27:53.067089 6 log.go:172] (0xc0017a9550) (0xc001e2f0e0) Stream added, broadcasting: 1 I0626 22:27:53.069334 6 log.go:172] (0xc0017a9550) Reply frame received for 1 I0626 22:27:53.069387 6 log.go:172] (0xc0017a9550) (0xc0028521e0) Create stream I0626 22:27:53.069411 6 log.go:172] (0xc0017a9550) (0xc0028521e0) Stream added, broadcasting: 3 I0626 22:27:53.070293 6 log.go:172] (0xc0017a9550) Reply frame received for 3 I0626 22:27:53.070320 6 log.go:172] (0xc0017a9550) (0xc002852500) Create stream I0626 22:27:53.070330 6 log.go:172] (0xc0017a9550) (0xc002852500) Stream added, broadcasting: 5 I0626 22:27:53.071330 6 log.go:172] (0xc0017a9550) Reply frame received for 5 I0626 22:27:53.135826 6 log.go:172] (0xc0017a9550) Data frame received for 3 I0626 22:27:53.135869 6 log.go:172] (0xc0028521e0) (3) Data frame handling I0626 22:27:53.135889 6 log.go:172] (0xc0028521e0) (3) Data frame sent I0626 22:27:53.135902 6 log.go:172] (0xc0017a9550) Data frame received for 3 I0626 22:27:53.135911 6 log.go:172] (0xc0028521e0) (3) Data frame handling I0626 22:27:53.135937 6 log.go:172] (0xc0017a9550) Data frame received for 5 I0626 22:27:53.135951 6 log.go:172] (0xc002852500) (5) Data frame handling I0626 22:27:53.137708 6 log.go:172] (0xc0017a9550) Data frame received for 1 I0626 22:27:53.137728 6 log.go:172] (0xc001e2f0e0) (1) Data frame handling I0626 22:27:53.137746 6 log.go:172] (0xc001e2f0e0) (1) Data frame sent I0626 22:27:53.137773 6 log.go:172] (0xc0017a9550) (0xc001e2f0e0) Stream removed, broadcasting: 1 I0626 22:27:53.137786 6 log.go:172] (0xc0017a9550) Go away received I0626 22:27:53.137953 6 log.go:172] (0xc0017a9550) (0xc001e2f0e0) Stream removed, broadcasting: 1 I0626 22:27:53.137981 6 log.go:172] (0xc0017a9550) (0xc0028521e0) Stream removed, broadcasting: 3 I0626 22:27:53.137994 6 log.go:172] (0xc0017a9550) (0xc002852500) Stream removed, broadcasting: 5 Jun 26 22:27:53.138: INFO: Exec stderr: "" Jun 26 22:27:53.138: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:53.138: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:53.172468 6 log.go:172] (0xc001bd8420) (0xc00236e500) Create stream I0626 22:27:53.172510 6 log.go:172] (0xc001bd8420) (0xc00236e500) Stream added, broadcasting: 1 I0626 22:27:53.175021 6 log.go:172] (0xc001bd8420) Reply frame received for 1 I0626 22:27:53.175057 6 log.go:172] (0xc001bd8420) (0xc0021e99a0) Create stream I0626 22:27:53.175068 6 log.go:172] (0xc001bd8420) (0xc0021e99a0) Stream added, broadcasting: 3 I0626 22:27:53.176015 6 log.go:172] (0xc001bd8420) Reply frame received for 3 I0626 22:27:53.176155 6 log.go:172] (0xc001bd8420) (0xc0021e9a40) Create stream I0626 22:27:53.176171 6 log.go:172] (0xc001bd8420) (0xc0021e9a40) Stream added, broadcasting: 5 I0626 22:27:53.177054 6 log.go:172] (0xc001bd8420) Reply frame received for 5 I0626 22:27:53.232141 6 log.go:172] (0xc001bd8420) Data frame received for 5 I0626 22:27:53.232165 6 log.go:172] (0xc0021e9a40) (5) Data frame handling I0626 22:27:53.232179 6 log.go:172] (0xc001bd8420) Data frame received for 3 I0626 22:27:53.232185 6 log.go:172] (0xc0021e99a0) (3) Data frame handling I0626 22:27:53.232195 6 log.go:172] (0xc0021e99a0) (3) Data frame sent I0626 22:27:53.232202 6 log.go:172] (0xc001bd8420) Data frame received for 3 I0626 22:27:53.232207 6 log.go:172] (0xc0021e99a0) (3) Data frame handling I0626 22:27:53.233007 6 log.go:172] (0xc001bd8420) Data frame received for 1 I0626 22:27:53.233022 6 log.go:172] (0xc00236e500) (1) Data frame handling I0626 22:27:53.233037 6 log.go:172] (0xc00236e500) (1) Data frame sent I0626 22:27:53.233049 6 log.go:172] (0xc001bd8420) (0xc00236e500) Stream removed, broadcasting: 1 I0626 22:27:53.233063 6 log.go:172] (0xc001bd8420) Go away received I0626 22:27:53.233335 6 log.go:172] (0xc001bd8420) (0xc00236e500) Stream removed, broadcasting: 1 I0626 22:27:53.233359 6 log.go:172] (0xc001bd8420) (0xc0021e99a0) Stream removed, broadcasting: 3 I0626 22:27:53.233379 6 log.go:172] (0xc001bd8420) (0xc0021e9a40) Stream removed, broadcasting: 5 Jun 26 22:27:53.233: INFO: Exec stderr: "" Jun 26 22:27:53.233: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:53.233: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:53.257736 6 log.go:172] (0xc002791290) (0xc002852780) Create stream I0626 22:27:53.257760 6 log.go:172] (0xc002791290) (0xc002852780) Stream added, broadcasting: 1 I0626 22:27:53.260339 6 log.go:172] (0xc002791290) Reply frame received for 1 I0626 22:27:53.260386 6 log.go:172] (0xc002791290) (0xc00236e5a0) Create stream I0626 22:27:53.260399 6 log.go:172] (0xc002791290) (0xc00236e5a0) Stream added, broadcasting: 3 I0626 22:27:53.261477 6 log.go:172] (0xc002791290) Reply frame received for 3 I0626 22:27:53.261510 6 log.go:172] (0xc002791290) (0xc0026cc0a0) Create stream I0626 22:27:53.261520 6 log.go:172] (0xc002791290) (0xc0026cc0a0) Stream added, broadcasting: 5 I0626 22:27:53.262527 6 log.go:172] (0xc002791290) Reply frame received for 5 I0626 22:27:53.342522 6 log.go:172] (0xc002791290) Data frame received for 5 I0626 22:27:53.342562 6 log.go:172] (0xc0026cc0a0) (5) Data frame handling I0626 22:27:53.342583 6 log.go:172] (0xc002791290) Data frame received for 3 I0626 22:27:53.342591 6 log.go:172] (0xc00236e5a0) (3) Data frame handling I0626 22:27:53.342601 6 
log.go:172] (0xc00236e5a0) (3) Data frame sent I0626 22:27:53.342610 6 log.go:172] (0xc002791290) Data frame received for 3 I0626 22:27:53.342626 6 log.go:172] (0xc00236e5a0) (3) Data frame handling I0626 22:27:53.344172 6 log.go:172] (0xc002791290) Data frame received for 1 I0626 22:27:53.344197 6 log.go:172] (0xc002852780) (1) Data frame handling I0626 22:27:53.344217 6 log.go:172] (0xc002852780) (1) Data frame sent I0626 22:27:53.344253 6 log.go:172] (0xc002791290) (0xc002852780) Stream removed, broadcasting: 1 I0626 22:27:53.344272 6 log.go:172] (0xc002791290) Go away received I0626 22:27:53.344344 6 log.go:172] (0xc002791290) (0xc002852780) Stream removed, broadcasting: 1 I0626 22:27:53.344357 6 log.go:172] (0xc002791290) (0xc00236e5a0) Stream removed, broadcasting: 3 I0626 22:27:53.344377 6 log.go:172] (0xc002791290) (0xc0026cc0a0) Stream removed, broadcasting: 5 Jun 26 22:27:53.344: INFO: Exec stderr: "" Jun 26 22:27:53.344: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5586 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:27:53.344: INFO: >>> kubeConfig: /root/.kube/config I0626 22:27:53.376862 6 log.go:172] (0xc001bd8a50) (0xc00236eb40) Create stream I0626 22:27:53.376885 6 log.go:172] (0xc001bd8a50) (0xc00236eb40) Stream added, broadcasting: 1 I0626 22:27:53.379537 6 log.go:172] (0xc001bd8a50) Reply frame received for 1 I0626 22:27:53.379578 6 log.go:172] (0xc001bd8a50) (0xc0026cc1e0) Create stream I0626 22:27:53.379595 6 log.go:172] (0xc001bd8a50) (0xc0026cc1e0) Stream added, broadcasting: 3 I0626 22:27:53.380572 6 log.go:172] (0xc001bd8a50) Reply frame received for 3 I0626 22:27:53.380603 6 log.go:172] (0xc001bd8a50) (0xc0021e9ae0) Create stream I0626 22:27:53.380612 6 log.go:172] (0xc001bd8a50) (0xc0021e9ae0) Stream added, broadcasting: 5 I0626 22:27:53.381701 6 log.go:172] (0xc001bd8a50) Reply frame received for 5 I0626 22:27:53.439315 6 log.go:172] (0xc001bd8a50) Data frame received for 5 I0626 22:27:53.439345 6 log.go:172] (0xc0021e9ae0) (5) Data frame handling I0626 22:27:53.439389 6 log.go:172] (0xc001bd8a50) Data frame received for 3 I0626 22:27:53.439417 6 log.go:172] (0xc0026cc1e0) (3) Data frame handling I0626 22:27:53.439442 6 log.go:172] (0xc0026cc1e0) (3) Data frame sent I0626 22:27:53.439460 6 log.go:172] (0xc001bd8a50) Data frame received for 3 I0626 22:27:53.439476 6 log.go:172] (0xc0026cc1e0) (3) Data frame handling I0626 22:27:53.441366 6 log.go:172] (0xc001bd8a50) Data frame received for 1 I0626 22:27:53.441586 6 log.go:172] (0xc00236eb40) (1) Data frame handling I0626 22:27:53.441609 6 log.go:172] (0xc00236eb40) (1) Data frame sent I0626 22:27:53.441632 6 log.go:172] (0xc001bd8a50) (0xc00236eb40) Stream removed, broadcasting: 1 I0626 22:27:53.441654 6 log.go:172] (0xc001bd8a50) Go away received I0626 22:27:53.441859 6 log.go:172] (0xc001bd8a50) (0xc00236eb40) Stream removed, broadcasting: 1 I0626 22:27:53.441934 6 log.go:172] (0xc001bd8a50) (0xc0026cc1e0) Stream removed, broadcasting: 3 I0626 22:27:53.441957 6 log.go:172] (0xc001bd8a50) (0xc0021e9ae0) Stream removed, broadcasting: 5 Jun 26 22:27:53.441: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:27:53.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5586" for this suite. 
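The exec checks above pin down the contract: the kubelet rewrites /etc/hosts only for pods on the pod network, and must leave a hostNetwork=true pod's /etc/hosts unmanaged. A minimal Go sketch of the hostNetwork variant of the pods this spec creates, using v0.17-era k8s.io/api types; the pod name, namespace, and container names come from the log above, while the busybox image and sleep command are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// hostNetwork=true: the kubelet must NOT install its managed /etc/hosts here,
	// which is what the "not kubelet-managed" cat /etc/hosts checks above verify.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod", Namespace: "e2e-kubelet-etc-hosts-5586"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
				{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}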
• [SLOW TEST:11.147 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4418,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:27:53.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 26 22:27:53.510: INFO: Waiting up to 5m0s for pod "pod-0198ee85-551c-4296-bc7d-350b3088b38d" in namespace "emptydir-8477" to be "success or failure" Jun 26 22:27:53.514: INFO: Pod "pod-0198ee85-551c-4296-bc7d-350b3088b38d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.76744ms Jun 26 22:27:55.518: INFO: Pod "pod-0198ee85-551c-4296-bc7d-350b3088b38d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00795541s Jun 26 22:27:57.522: INFO: Pod "pod-0198ee85-551c-4296-bc7d-350b3088b38d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011892338s STEP: Saw pod success Jun 26 22:27:57.522: INFO: Pod "pod-0198ee85-551c-4296-bc7d-350b3088b38d" satisfied condition "success or failure" Jun 26 22:27:57.525: INFO: Trying to get logs from node jerma-worker2 pod pod-0198ee85-551c-4296-bc7d-350b3088b38d container test-container: STEP: delete the pod Jun 26 22:27:57.552: INFO: Waiting for pod pod-0198ee85-551c-4296-bc7d-350b3088b38d to disappear Jun 26 22:27:57.556: INFO: Pod pod-0198ee85-551c-4296-bc7d-350b3088b38d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:27:57.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8477" for this suite. 
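The "(non-root,0666,default)" case above decodes as: run the container as a non-root UID, ask for file mode 0666, and back the emptyDir with the default (disk) medium. A rough Go sketch of an equivalent pod; the namespace matches the log, but the real test drives the e2e mounttest image, so the busybox command, pod name, and UID 1001 here are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1001) // any non-root UID
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666", Namespace: "emptydir-8477"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Default medium: leaving EmptyDirVolumeSource.Medium unset means disk-backed.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with mode 0666 and print the mode back for verification.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}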
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4435,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:27:57.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:27:57.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062" in namespace "downward-api-184" to be "success or failure" Jun 26 22:27:57.670: INFO: Pod "downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062": Phase="Pending", Reason="", readiness=false. Elapsed: 14.387322ms Jun 26 22:27:59.674: INFO: Pod "downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018934301s Jun 26 22:28:01.679: INFO: Pod "downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023919678s STEP: Saw pod success Jun 26 22:28:01.679: INFO: Pod "downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062" satisfied condition "success or failure" Jun 26 22:28:01.683: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062 container client-container: STEP: delete the pod Jun 26 22:28:01.716: INFO: Waiting for pod downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062 to disappear Jun 26 22:28:01.724: INFO: Pod downwardapi-volume-d8c6c2b3-6e45-4585-adf2-8d22d30b9062 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:01.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-184" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:01.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:17.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5153" for this suite. • [SLOW TEST:16.123 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":267,"skipped":4474,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:17.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 22:28:18.886: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 22:28:20.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807298, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807298, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807298, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728807298, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 22:28:24.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:24.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7211" for this suite. STEP: Destroying namespace "webhook-7211-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.553 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":268,"skipped":4475,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:24.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:28:24.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054" in namespace "projected-8622" to be "success or failure" Jun 26 22:28:24.525: INFO: Pod "downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.925127ms Jun 26 22:28:26.528: INFO: Pod "downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022686883s Jun 26 22:28:28.532: INFO: Pod "downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026258891s STEP: Saw pod success Jun 26 22:28:28.532: INFO: Pod "downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054" satisfied condition "success or failure" Jun 26 22:28:28.535: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054 container client-container: STEP: delete the pod Jun 26 22:28:28.621: INFO: Waiting for pod downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054 to disappear Jun 26 22:28:28.623: INFO: Pod downwardapi-volume-896b1197-f991-4204-ac14-12d75b0c4054 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:28.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8622" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4481,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:28.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 22:28:34.030: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:34.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6988" for this suite. 
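What the runtime spec above pins down: with TerminationMessagePolicy=FallbackToLogsOnError, container logs are used as the termination message only when the container fails, so a container that exits 0 while writing neither logs nor /dev/termination-log must report an empty message, matching the "Expected: &{} to match Container's Termination Message: --" line. A Go sketch of such a container; the namespace is from the log, the pod name and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-pod", Namespace: "container-runtime-6988"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox",
				// Exit 0 without writing logs or /dev/termination-log: with
				// FallbackToLogsOnError the reported message stays empty, because
				// the log fallback only applies when the container fails.
				Command:                  []string{"/bin/true"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}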
• [SLOW TEST:5.529 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4489,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:34.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 26 22:28:38.819: INFO: Successfully updated pod "annotationupdate61cedadf-f0c7-478d-a9ee-5b18009aeefc" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:40.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1644" for this suite. 
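The annotation test above hinges on the kubelet refreshing downwardAPI volume files after metadata changes: it updates the pod's annotations ("Successfully updated pod ..." in the log) and polls until the projected file reflects the new value. A Go sketch of a pod that projects its own annotations into a file; the container name client-container matches the log, while the image, command, annotation value, and mount path are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate",
			Namespace:   "downward-api-1644",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Keep printing the projected file; the kubelet rewrites it
				// (with some delay) after the pod's annotations are updated.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}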
• [SLOW TEST:6.703 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4497,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:40.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 26 22:28:44.952: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5966 PodName:pod-sharedvolume-2254bf21-3c11-4257-af4b-556a9906843e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 22:28:44.952: INFO: >>> kubeConfig: /root/.kube/config I0626 22:28:44.974595 6 log.go:172] (0xc0027904d0) (0xc0012e1860) Create stream I0626 22:28:44.974623 6 log.go:172] (0xc0027904d0) (0xc0012e1860) Stream added, broadcasting: 1 I0626 22:28:44.977236 6 log.go:172] (0xc0027904d0) Reply frame received for 1 I0626 22:28:44.977275 6 log.go:172] (0xc0027904d0) (0xc0012e19a0) Create stream I0626 22:28:44.977283 6 log.go:172] (0xc0027904d0) (0xc0012e19a0) Stream added, broadcasting: 3 I0626 22:28:44.978426 6 log.go:172] (0xc0027904d0) Reply frame received for 3 I0626 22:28:44.978458 6 log.go:172] (0xc0027904d0) (0xc0012e1a40) Create stream I0626 22:28:44.978470 6 log.go:172] (0xc0027904d0) (0xc0012e1a40) Stream added, broadcasting: 5 I0626 22:28:44.979460 6 log.go:172] (0xc0027904d0) Reply frame received for 5 I0626 22:28:45.041972 6 log.go:172] (0xc0027904d0) Data frame received for 3 I0626 22:28:45.041991 6 log.go:172] (0xc0012e19a0) (3) Data frame handling I0626 22:28:45.041998 6 log.go:172] (0xc0012e19a0) (3) Data frame sent I0626 22:28:45.042003 6 log.go:172] (0xc0027904d0) Data frame received for 3 I0626 22:28:45.042007 6 log.go:172] (0xc0012e19a0) (3) Data frame handling I0626 22:28:45.042043 6 log.go:172] (0xc0027904d0) Data frame received for 5 I0626 22:28:45.042067 6 log.go:172] (0xc0012e1a40) (5) Data frame handling I0626 22:28:45.043527 6 log.go:172] (0xc0027904d0) Data frame received for 1 I0626 22:28:45.043558 6 log.go:172] (0xc0012e1860) (1) Data frame handling I0626 22:28:45.043587 6 log.go:172] (0xc0012e1860) (1) Data frame sent I0626 22:28:45.043615 6 log.go:172] (0xc0027904d0) (0xc0012e1860) Stream removed, broadcasting: 1 I0626 22:28:45.043677 6 log.go:172] (0xc0027904d0) Go away received I0626 22:28:45.043707 6 log.go:172] (0xc0027904d0) (0xc0012e1860) Stream removed, broadcasting: 1 I0626 
22:28:45.043722 6 log.go:172] (0xc0027904d0) (0xc0012e19a0) Stream removed, broadcasting: 3 I0626 22:28:45.043736 6 log.go:172] (0xc0027904d0) (0xc0012e1a40) Stream removed, broadcasting: 5 Jun 26 22:28:45.043: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:45.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5966" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":272,"skipped":4497,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:45.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 26 22:28:45.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce" in namespace "downward-api-5977" to be "success or failure" Jun 26 22:28:45.180: INFO: Pod "downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce": Phase="Pending", Reason="", readiness=false. Elapsed: 13.254342ms Jun 26 22:28:47.247: INFO: Pod "downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079507858s Jun 26 22:28:49.265: INFO: Pod "downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098153014s Jun 26 22:28:51.269: INFO: Pod "downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101734836s STEP: Saw pod success Jun 26 22:28:51.269: INFO: Pod "downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce" satisfied condition "success or failure" Jun 26 22:28:51.271: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce container client-container: STEP: delete the pod Jun 26 22:28:51.296: INFO: Waiting for pod downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce to disappear Jun 26 22:28:51.332: INFO: Pod downwardapi-volume-0c321139-b63b-4cf5-8345-0a5f67b037ce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:28:51.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5977" for this suite. 
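The "set mode on item file" spec above sets an explicit per-item Mode on a downwardAPI volume file and verifies the permissions the container actually sees. A Go sketch under the assumption that the mode under test was 0400 (the log does not show the value); the namespace is from the log, while pod name, image, and paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed mode for this sketch, written in octal
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-mode", Namespace: "downward-api-5977"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "name_file_mode",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							Mode:     &mode, // per-item mode overrides the volume-wide default
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/name_file_mode"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}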
• [SLOW TEST:6.288 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4515,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:28:51.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-ngpz STEP: Creating a pod to test atomic-volume-subpath Jun 26 22:28:51.460: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ngpz" in namespace "subpath-1656" to be "success or failure" Jun 26 22:28:51.505: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Pending", Reason="", readiness=false. Elapsed: 45.390516ms Jun 26 22:28:53.509: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04962198s Jun 26 22:28:55.514: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 4.054373937s Jun 26 22:28:57.518: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 6.058109768s Jun 26 22:28:59.542: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 8.082294608s Jun 26 22:29:01.545: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 10.085769559s Jun 26 22:29:03.549: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 12.089761916s Jun 26 22:29:05.554: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 14.094245856s Jun 26 22:29:07.557: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 16.097674218s Jun 26 22:29:09.561: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 18.10164054s Jun 26 22:29:11.566: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 20.105884064s Jun 26 22:29:13.570: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 22.109899765s Jun 26 22:29:15.574: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Running", Reason="", readiness=true. Elapsed: 24.113919347s Jun 26 22:29:17.577: INFO: Pod "pod-subpath-test-secret-ngpz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.117254738s STEP: Saw pod success Jun 26 22:29:17.577: INFO: Pod "pod-subpath-test-secret-ngpz" satisfied condition "success or failure" Jun 26 22:29:17.579: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-ngpz container test-container-subpath-secret-ngpz: STEP: delete the pod Jun 26 22:29:17.662: INFO: Waiting for pod pod-subpath-test-secret-ngpz to disappear Jun 26 22:29:17.665: INFO: Pod pod-subpath-test-secret-ngpz no longer exists STEP: Deleting pod pod-subpath-test-secret-ngpz Jun 26 22:29:17.665: INFO: Deleting pod "pod-subpath-test-secret-ngpz" in namespace "subpath-1656" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:29:17.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1656" for this suite. • [SLOW TEST:26.334 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":274,"skipped":4515,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:29:17.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 26 22:29:17.763: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:29:25.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4867" for this suite. 
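"PodSpec: initContainers in spec.initContainers" above is the whole setup: on a restartPolicy=Always pod, every init container must run to completion, in declaration order, before the regular containers start, and the spec passes once the pod reports Running with both inits terminated. A minimal Go sketch; the namespace is from the log, while container names, images, and commands are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init", Namespace: "init-container-4867"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run one at a time, in order; both must exit 0
			// before "run1" is started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1", // long-running no-op main container
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}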
• [SLOW TEST:8.173 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":275,"skipped":4522,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:29:25.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 26 22:29:33.999: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:34.014: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:36.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:36.166: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:38.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:38.018: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:40.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:40.019: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:42.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:42.018: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:44.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:44.018: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:46.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:46.032: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:48.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:48.018: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 22:29:50.014: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 22:29:50.018: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:29:50.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4466" for this suite. 
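The lifecycle spec above first starts a separate handler pod ("create the container to handle the HTTPGet hook request"), then creates pod-with-poststart-exec-hook and confirms the hook fired before deleting everything. A Go sketch of the hooked pod using the v0.17-era corev1.Handler type (renamed LifecycleHandler in later releases); the real hook calls out to the handler pod, so the command shown here is only a stand-in, and the image is an assumption:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook", Namespace: "container-lifecycle-hook-4466"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Stand-in for the test's real hook, which contacts the handler pod.
							Command: []string{"sh", "-c", "echo poststart"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}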
• [SLOW TEST:24.180 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:29:50.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-52bb7d48-6a6d-4656-8aa8-5ae0cc00e8ba STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 26 22:29:56.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9381" for this suite. • [SLOW TEST:6.121 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4562,"failed":0} [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 26 22:29:56.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0626 22:30:06.268289 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 26 22:30:06.268: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 26 22:30:06.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1092" for this suite.
• [SLOW TEST:10.127 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":278,"skipped":4562,"failed":0}
SS
Jun 26 22:30:06.276: INFO: Running AfterSuite actions on all nodes
Jun 26 22:30:06.276: INFO: Running AfterSuite actions on node 1
Jun 26 22:30:06.276: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4853.256 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS